What Is a Research Design | Types, Guide & Examples
Published on June 7, 2021 by Shona McCombes. Revised on November 20, 2023 by Pritha Bhandari.
A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:
- Your overall research objectives and approach
- Whether you’ll rely on primary research or secondary research
- Your sampling methods or criteria for selecting subjects
- Your data collection methods
- The procedures you’ll follow to collect data
- Your data analysis methods
A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.
Table of contents
- Step 1: Consider your aims and approach
- Step 2: Choose a type of research design
- Step 3: Identify your population and sampling method
- Step 4: Choose your data collection methods
- Step 5: Plan your data collection procedures
- Step 6: Decide on your data analysis strategies
- Frequently asked questions about research design
Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.
There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.
The first choice you need to make is whether you’ll take a qualitative or quantitative approach.
Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.
Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.
It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.
Practical and ethical considerations when designing research
As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.
- How much time do you have to collect data and write up the research?
- Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
- Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
- Will you need ethical approval?
At each stage of the research design process, make sure that your choices are practically feasible.
Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.
Types of quantitative research designs
Quantitative designs can be split into four main types.
- Experimental and quasi-experimental designs allow you to test cause-and-effect relationships.
- Descriptive and correlational designs allow you to measure variables and describe relationships between them.
With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).
Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.
Types of qualitative research designs
Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.
Common types of qualitative design include case studies, ethnography, and grounded theory. These designs often take similar approaches to data collection, but each focuses on different aspects when analyzing the data.
Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.
In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.
Defining the population
A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.
For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?
The more precisely you define your population, the easier it will be to gather a representative sample.
Sampling methods
Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.
To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalize your results to the population as a whole.
Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.
For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
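As an illustration, here is a minimal sketch of simple random sampling (a probability method) using Python's standard library. The participant IDs are hypothetical placeholders standing in for a real sampling frame:

```python
import random

# Toy sampling frame: hypothetical IDs standing in for a defined population.
population = [f"participant_{i:03d}" for i in range(500)]

# Simple random sampling (a probability method): every member of the
# population has an equal, known chance of being selected.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 -- sampled without replacement, so no duplicates
```

Because every member has a known, equal selection probability, results from a sample like this can be generalized to the population in a statistically principled way.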
Case selection in qualitative research
In some types of qualitative designs, sampling may not be relevant.
For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.
In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.
For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.
Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.
You can choose just one data collection method, or use several methods in the same study.
Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.
Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.
Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.
Other methods of data collection
There are many other ways you might collect data depending on your field and topic.
If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.
If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.
With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.
Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.
However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.
As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.
Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.
Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.
If you’re using observations, which events or actions will you count?
If you’re using surveys, which questions will you ask and what range of responses will be offered?
You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity have already been established.
Reliability and validity
Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.
For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.
If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
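For instance, a pilot study of a multi-item questionnaire often reports an internal-consistency reliability coefficient such as Cronbach's alpha. Here is a minimal sketch in plain Python using made-up pilot data; real studies would typically use dedicated statistical software:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of scores
    (one score per respondent, same respondent order in every item)."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent total
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical pilot data: 3 questionnaire items, 5 respondents each.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.89
```

A common rule of thumb treats alpha above roughly 0.7 as acceptable internal consistency, though the threshold depends on the field and the stakes of the measurement.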
As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.
That means making decisions about things like:
- How many participants do you need for an adequate sample size?
- What inclusion and exclusion criteria will you use to identify eligible participants?
- How will you contact your sample—by mail, online, by phone, or in person?
If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?
If you’re using a non-probability method, how will you avoid research bias and ensure a representative sample?
It’s also important to create a data management plan for organizing and storing your data.
Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.
Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability).
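As one small example of safeguarding sensitive data, direct identifiers can be replaced with stable pseudonyms before analysis. This sketch uses Python's hashlib with a hypothetical secret salt; a real data management plan would also cover storage, access control, and backups:

```python
import hashlib

def pseudonymize(identifier, salt):
    """Replace a direct identifier with a stable pseudonym.
    Keep the salt secret and stored separately from the data."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

record = {"name": "Jane Doe", "response": "agree"}
safe = {
    "pid": pseudonymize(record["name"], salt="keep-this-secret"),
    "response": record["response"],
}
print(safe["pid"] != record["name"])  # True -- the name is no longer stored
```

Because the same name and salt always produce the same pseudonym, records from different files can still be linked during analysis without keeping the name itself in the dataset.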
On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.
Quantitative data analysis
In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarize your sample data, make estimates, and test hypotheses.
Using descriptive statistics, you can summarize your sample data in terms of:
- The distribution of the data (e.g., the frequency of each score on a test)
- The central tendency of the data (e.g., the mean to describe the average score)
- The variability of the data (e.g., the standard deviation to describe how spread out the scores are)
The specific calculations you can do depend on the level of measurement of your variables.
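To illustrate, the three descriptive summaries above can be computed with Python's standard library; the test scores here are made up:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical test scores for a small sample of 10 participants.
scores = [72, 85, 78, 90, 85, 66, 85, 78, 92, 70]

distribution = Counter(scores)  # frequency of each score
center = mean(scores)           # central tendency: the average score
spread = stdev(scores)          # variability: sample standard deviation

print(distribution[85])      # 3 -- the score 85 occurs three times
print(center)                # 80.1
print(round(spread, 1))      # 8.7
```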
Using inferential statistics, you can:
- Make estimates about the population based on your sample data.
- Test hypotheses about a relationship between variables.
Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.
Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
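As a sketch of a comparison test, the code below computes Welch's t statistic (a t test that allows unequal group variances) for two hypothetical groups. A complete test would also derive degrees of freedom and a p-value, which in practice comes from statistical software:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for comparing the means of two independent
    groups; variance() is the sample variance (divides by n - 1)."""
    na, nb = len(a), len(b)
    se = sqrt(variance(a) / na + variance(b) / nb)  # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical outcome scores for a treatment and a control group.
treatment = [23, 27, 25, 30, 28, 26]
control = [20, 22, 24, 19, 23, 21]

t = welch_t(treatment, control)
print(round(t, 2))  # 3.99 -- large relative to typical critical values (~2)
```

A t statistic this far from zero would usually lead to rejecting the null hypothesis of equal group means, but the exact threshold depends on the degrees of freedom and chosen significance level.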
Qualitative data analysis
In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.
Two of the most common approaches to doing this are thematic analysis and discourse analysis.
There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.
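The coding stage of thematic analysis is interpretive, but simple tooling can help tally the codes a researcher has already assigned. This toy sketch counts hypothetical codes across interview excerpts:

```python
from collections import Counter

# Hypothetical researcher-assigned codes for four interview excerpts
# (the coding itself is interpretive; this only tallies the results).
coded_excerpts = [
    {"id": 1, "codes": ["workload", "support"]},
    {"id": 2, "codes": ["workload", "autonomy"]},
    {"id": 3, "codes": ["support"]},
    {"id": 4, "codes": ["workload", "support", "autonomy"]},
]

code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt["codes"])
print(code_counts["workload"])  # 3 -- "workload" was applied to three excerpts
```

Frequently co-occurring or high-frequency codes are candidates for grouping into broader themes in the next stage of analysis.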
A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.
A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.
Quantitative research designs can be divided into two main categories:
- Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
- Experimental and quasi-experimental designs are used to test causal relationships.
Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.
The priorities of a research design can vary depending on the field, but you usually have to specify:
- Your research questions and/or hypotheses
- Your overall approach (e.g., qualitative or quantitative)
- The type of design you’re using (e.g., a survey, experiment, or case study)
- Your data collection methods (e.g., questionnaires, observations)
- Your data collection procedures (e.g., operationalization, timing and data management)
- Your data analysis methods (e.g., statistical tests or thematic analysis)
A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
Operationalization means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.
A research project is an academic, scientific, or professional undertaking to answer a research question. Research projects can take many forms, such as qualitative or quantitative, descriptive, longitudinal, experimental, or correlational. What kind of research approach you choose will depend on your topic.
Cite this Scribbr article
McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved November 29, 2023, from https://www.scribbr.com/methodology/research-design/
Qualitative & Quantitative Research Support
Writing a Case Study
What is a case study?
A case study is:
- An in-depth research design that primarily uses a qualitative methodology but sometimes includes quantitative methodology.
- Used to examine an identifiable problem confirmed through research.
- Used to investigate an individual, group of people, organization, or event.
- Used to mostly answer "how" and "why" questions.
What are the different types of case studies?
Note: These are the primary types of case study. As you continue to research and learn about case studies, you will find many additional types.
Who are your case study participants?
What is triangulation ?
Validity and credibility are essential to a case study. The researcher should therefore use triangulation (drawing on multiple data sources, methods, or investigators) to ensure trustworthiness and to accurately reflect what the study seeks to investigate.
How to write a Case Study?
When developing a case study, you can present the information in different ways, but remember to include all five parts of your case study.
- Last Updated: Nov 20, 2023 2:40 PM
- URL: https://resources.nu.edu/researchtools
Case Study | Definition, Examples & Methods
Published on 5 May 2022 by Shona McCombes. Revised on 30 January 2023.
A case study is a detailed study of a specific subject, such as a person, group, place, event, organisation, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.
A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.
Table of contents
- When to do a case study
- Step 1: Select a case
- Step 2: Build a theoretical framework
- Step 3: Collect your data
- Step 4: Describe and analyse the case
A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.
Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.
You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.
Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:
- Provide new or unexpected insights into the subject
- Challenge or complicate existing assumptions and theories
- Propose practical courses of action to resolve a problem
- Open up new directions for future research
Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem. However, you can also choose a more common or representative case to exemplify a particular category, experience, or phenomenon.
If you find yourself aiming to simultaneously investigate and solve an issue, consider conducting action research. As its name suggests, action research conducts research and takes action at the same time, and is highly iterative and flexible.
While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:
- Exemplify a theory by showing how it explains the case under investigation
- Expand on a theory by uncovering new concepts and ideas that need to be incorporated
- Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions
To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.
There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.
The aim is to gain as thorough an understanding as possible of the case and its context.
In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.
How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.
Others are written in a more narrative style, aiming to explore the case from various angles and analyse its meanings and implications (for example, by using textual analysis or discourse analysis).
In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.
Cite this Scribbr article
McCombes, S. (2023, January 30). Case Study | Definition, Examples & Methods. Scribbr. Retrieved 28 November 2023, from https://www.scribbr.co.uk/research-methods/case-studies/
Case Study – Methods, Examples and Guide
A case study is a research method that involves an in-depth examination and analysis of a particular phenomenon or case, such as an individual, organization, community, event, or situation.
It is a qualitative research approach that aims to provide a detailed and comprehensive understanding of the case being studied. Case studies typically involve multiple sources of data, including interviews, observations, documents, and artifacts, which are analyzed using various techniques, such as content analysis, thematic analysis, and grounded theory. The findings of a case study are often used to develop theories, inform policy or practice, or generate new research questions.
Types of Case Study
Types and Methods of Case Study are as follows:
Single-Case Study
A single-case study is an in-depth analysis of a single case. This type of case study is useful when the researcher wants to understand a specific phenomenon in detail.
For Example , A researcher might conduct a single-case study on a particular individual to understand their experiences with a particular health condition or a specific organization to explore their management practices. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a single-case study are often used to generate new research questions, develop theories, or inform policy or practice.
Multiple-Case Study
A multiple-case study involves the analysis of several cases that are similar in nature. This type of case study is useful when the researcher wants to identify similarities and differences between the cases.
For Example, a researcher might conduct a multiple-case study on several companies to explore the factors that contribute to their success or failure. The researcher collects data from each case, compares and contrasts the findings, and uses various techniques to analyze the data, such as comparative analysis or pattern-matching. The findings of a multiple-case study can be used to develop theories, inform policy or practice, or generate new research questions.
Exploratory Case Study
An exploratory case study is used to explore a new or understudied phenomenon. This type of case study is useful when the researcher wants to generate hypotheses or theories about the phenomenon.
For Example, a researcher might conduct an exploratory case study on a new technology to understand its potential impact on society. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as grounded theory or content analysis. The findings of an exploratory case study can be used to generate new research questions, develop theories, or inform policy or practice.
Descriptive Case Study
A descriptive case study is used to describe a particular phenomenon in detail. This type of case study is useful when the researcher wants to provide a comprehensive account of the phenomenon.
For Example, a researcher might conduct a descriptive case study on a particular community to understand its social and economic characteristics. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a descriptive case study can be used to inform policy or practice or generate new research questions.
Instrumental Case Study
An instrumental case study is used to understand a particular phenomenon that is instrumental in achieving a particular goal. This type of case study is useful when the researcher wants to understand the role of the phenomenon in achieving the goal.
For Example, a researcher might conduct an instrumental case study on a particular policy to understand its impact on achieving a particular goal, such as reducing poverty. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of an instrumental case study can be used to inform policy or practice or generate new research questions.
Case Study Data Collection Methods
Here are some common data collection methods for case studies:
Interviews
Interviews involve asking questions to individuals who have knowledge or experience relevant to the case study. Interviews can be structured (where the same questions are asked to all participants) or unstructured (where the interviewer follows up on the responses with further questions). Interviews can be conducted in person, over the phone, or through video conferencing.
Observations
Observations involve watching and recording the behavior and activities of individuals or groups relevant to the case study. Observations can be participant (where the researcher actively participates in the activities) or non-participant (where the researcher observes from a distance). Observations can be recorded using notes, audio or video recordings, or photographs.
Documents
Documents can be used as a source of information for case studies. Documents can include reports, memos, emails, letters, and other written materials related to the case study. Documents can be collected from the case study participants or from public sources.
Surveys
Surveys involve asking a set of questions to a sample of individuals relevant to the case study. Surveys can be administered in person, over the phone, through mail or email, or online. Surveys can be used to gather information on attitudes, opinions, or behaviors related to the case study.
Artifacts
Artifacts are physical objects relevant to the case study. Artifacts can include tools, equipment, products, or other objects that provide insights into the case study phenomenon.
How to Conduct Case Study Research
Conducting case study research involves several steps that need to be followed to ensure the quality and rigor of the study. Here are the steps to conduct case study research:
- Define the research questions: The first step in conducting a case study research is to define the research questions. The research questions should be specific, measurable, and relevant to the case study phenomenon under investigation.
- Select the case: The next step is to select the case or cases to be studied. The case should be relevant to the research questions and should provide rich and diverse data that can be used to answer the research questions.
- Collect data: Data can be collected using various methods, such as interviews, observations, documents, surveys, and artifacts. The data collection method should be selected based on the research questions and the nature of the case study phenomenon.
- Analyze the data: The data collected from the case study should be analyzed using various techniques, such as content analysis, thematic analysis, or grounded theory. The analysis should be guided by the research questions and should aim to provide insights and conclusions relevant to the research questions.
- Draw conclusions: The conclusions drawn from the case study should be based on the data analysis and should be relevant to the research questions. The conclusions should be supported by evidence and should be clearly stated.
- Validate the findings: The findings of the case study should be validated by reviewing the data and the analysis with participants or other experts in the field. This helps to ensure the validity and reliability of the findings.
- Write the report: The final step is to write the report of the case study research. The report should provide a clear description of the case study phenomenon, the research questions, the data collection methods, the data analysis, the findings, and the conclusions. The report should be written in a clear and concise manner and should follow the guidelines for academic writing.
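As an illustrative sketch of the analysis step above, one of the most basic forms of content analysis is a frequency tally of hand-coded themes across interview transcripts. Everything here is invented for illustration — the participants, the theme codes, and the helper functions are not part of any standard tool, and real qualitative analysis typically uses dedicated software and much richer coding schemes.

```python
from collections import Counter

# Hypothetical coded excerpts: each transcript has already been hand-coded
# with themes by the researcher (participants and codes are illustrative).
coded_transcripts = {
    "participant_1": ["workload", "communication", "workload"],
    "participant_2": ["communication", "leadership"],
    "participant_3": ["workload", "leadership", "leadership"],
}

def theme_frequencies(transcripts):
    """Tally how often each theme was coded across all transcripts."""
    counts = Counter()
    for codes in transcripts.values():
        counts.update(codes)
    return counts

def themes_by_participant(transcripts, theme):
    """List which participants mentioned a given theme, for cross-checking."""
    return sorted(p for p, codes in transcripts.items() if theme in codes)

freqs = theme_frequencies(coded_transcripts)
print(freqs.most_common())
print(themes_by_participant(coded_transcripts, "workload"))
```

A tally like this is only a starting point: the counts suggest which themes to examine in depth, while the per-participant lookup supports the kind of cross-checking against the raw data that the validation step calls for.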
Examples of Case Study
Here are some examples of case study research:
- The Hawthorne Studies: Conducted between 1924 and 1932, the Hawthorne Studies were a series of case studies conducted by Elton Mayo and his colleagues to examine the impact of work environment on employee productivity. The studies were conducted at the Hawthorne Works plant of the Western Electric Company near Chicago and included interviews, observations, and experiments.
- The Stanford Prison Experiment: Conducted in 1971, the Stanford Prison Experiment was a case study conducted by Philip Zimbardo to examine the psychological effects of power and authority. The study involved simulating a prison environment and assigning participants to the role of guards or prisoners. The study was controversial due to the ethical issues it raised.
- The Challenger Disaster: The Challenger Disaster was a case study conducted to examine the causes of the Space Shuttle Challenger explosion in 1986. The study included interviews, observations, and analysis of data to identify the technical, organizational, and cultural factors that contributed to the disaster.
- The Enron Scandal: The Enron Scandal was a case study conducted to examine the causes of the Enron Corporation’s bankruptcy in 2001. The study included interviews, analysis of financial data, and review of documents to identify the accounting practices, corporate culture, and ethical issues that led to the company’s downfall.
- The Fukushima Nuclear Disaster : The Fukushima Nuclear Disaster was a case study conducted to examine the causes of the nuclear accident that occurred at the Fukushima Daiichi Nuclear Power Plant in Japan in 2011. The study included interviews, analysis of data, and review of documents to identify the technical, organizational, and cultural factors that contributed to the disaster.
Application of Case Study
Case studies have a wide range of applications across various fields and industries. Here are some examples:
Business and Management
Case studies are widely used in business and management to examine real-life situations and develop problem-solving skills. Case studies can help students and professionals to develop a deep understanding of business concepts, theories, and best practices.
Healthcare
Case studies are used in healthcare to examine patient care, treatment options, and outcomes. Case studies can help healthcare professionals to develop critical thinking skills, diagnose complex medical conditions, and develop effective treatment plans.
Education
Case studies are used in education to examine teaching and learning practices. Case studies can help educators to develop effective teaching strategies, evaluate student progress, and identify areas for improvement.
Social Sciences
Case studies are widely used in social sciences to examine human behavior, social phenomena, and cultural practices. Case studies can help researchers to develop theories, test hypotheses, and gain insights into complex social issues.
Law and Ethics
Case studies are used in law and ethics to examine legal and ethical dilemmas. Case studies can help lawyers, policymakers, and ethics professionals to develop critical thinking skills, analyze complex cases, and make informed decisions.
Purpose of Case Study
The purpose of a case study is to provide a detailed analysis of a specific phenomenon, issue, or problem in its real-life context. A case study is a qualitative research method that involves the in-depth exploration and analysis of a particular case, which can be an individual, group, organization, event, or community.
The primary purpose of a case study is to generate a comprehensive and nuanced understanding of the case, including its history, context, and dynamics. Case studies can help researchers to identify and examine the underlying factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and detailed understanding of the case, which can inform future research, practice, or policy.
Case studies can also serve other purposes, including:
- Illustrating a theory or concept: Case studies can be used to illustrate and explain theoretical concepts and frameworks, providing concrete examples of how they can be applied in real-life situations.
- Developing hypotheses: Case studies can help to generate hypotheses about the causal relationships between different factors and outcomes, which can be tested through further research.
- Providing insight into complex issues: Case studies can provide insights into complex and multifaceted issues, which may be difficult to understand through other research methods.
- Informing practice or policy: Case studies can be used to inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
Advantages of Case Study Research
There are several advantages of case study research, including:
- In-depth exploration: Case study research allows for a detailed exploration and analysis of a specific phenomenon, issue, or problem in its real-life context. This can provide a comprehensive understanding of the case and its dynamics, which may not be possible through other research methods.
- Rich data: Case study research can generate rich and detailed data, including qualitative data such as interviews, observations, and documents. This can provide a nuanced understanding of the case and its complexity.
- Holistic perspective: Case study research allows for a holistic perspective of the case, taking into account the various factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and comprehensive understanding of the case.
- Theory development: Case study research can help to develop and refine theories and concepts by providing empirical evidence and concrete examples of how they can be applied in real-life situations.
- Practical application: Case study research can inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
- Contextualization: Case study research takes into account the specific context in which the case is situated, which can help to understand how the case is influenced by the social, cultural, and historical factors of its environment.
Limitations of Case Study Research
There are several limitations of case study research, including:
- Limited generalizability : Case studies are typically focused on a single case or a small number of cases, which limits the generalizability of the findings. The unique characteristics of the case may not be applicable to other contexts or populations, which may limit the external validity of the research.
- Biased sampling: Case studies may rely on purposive or convenience sampling, which can introduce bias into the sample selection process. This may limit the representativeness of the sample and the generalizability of the findings.
- Subjectivity: Case studies rely on the interpretation of the researcher, which can introduce subjectivity into the analysis. The researcher’s own biases, assumptions, and perspectives may influence the findings, which may limit the objectivity of the research.
- Limited control: Case studies are typically conducted in naturalistic settings, which limits the control that the researcher has over the environment and the variables being studied. This may limit the ability to establish causal relationships between variables.
- Time-consuming: Case studies can be time-consuming to conduct, as they typically involve a detailed exploration and analysis of a specific case. This may limit the feasibility of conducting multiple case studies or conducting case studies in a timely manner.
- Resource-intensive: Case studies may require significant resources, including time, funding, and expertise. This may limit the ability of researchers to conduct case studies in resource-constrained settings.
J Med Libr Assoc, 107(1), January 2019
Distinguishing case study as a research method from case reports as a publication type
The purpose of this editorial is to distinguish between case reports and case studies. In health, case reports are familiar ways of sharing events or efforts of intervening with single patients with previously unreported features. As a qualitative methodology, case study research encompasses a great deal more complexity than a typical case report and often incorporates multiple streams of data combined in creative ways. The depth and richness of case study description helps readers understand the case and whether findings might be applicable beyond that setting.
Single-institution descriptive reports of library activities are often labeled by their authors as “case studies.” By contrast, in health care, single patient retrospective descriptions are published as “case reports.” Both case reports and case studies are valuable to readers and provide a publication opportunity for authors. A previous editorial by Akers and Amos about improving case studies addresses issues that are more common to case reports; for example, not having a review of the literature or being anecdotal, not generalizable, and prone to various types of bias such as positive outcome bias [ 1 ]. However, case study research as a qualitative methodology is pursued for different purposes than generalizability. The authors’ purpose in this editorial is to clearly distinguish between case reports and case studies. We believe that this will assist authors in describing and designating the methodological approach of their publications and help readers appreciate the rigor of well-executed case study research.
Case reports often provide a first exploration of a phenomenon or an opportunity for a first publication by a trainee in the health professions. In health care, case reports are familiar ways of sharing events or efforts of intervening with single patients with previously unreported features. Another type of study categorized as a case report is an “N of 1” study or single-subject clinical trial, which considers an individual patient as the sole unit of observation in a study investigating the efficacy or side effect profiles of different interventions. Entire journals have evolved to publish case reports, which often rely on template structures with limited contextualization or discussion of previous cases. Examples that are indexed in MEDLINE include the American Journal of Case Reports , BMJ Case Reports, Journal of Medical Case Reports, and Journal of Radiology Case Reports . Similar publications appear in veterinary medicine and are indexed in CAB Abstracts, such as Case Reports in Veterinary Medicine and Veterinary Record Case Reports .
As a qualitative methodology, however, case study research encompasses a great deal more complexity than a typical case report and often incorporates multiple streams of data combined in creative ways. Distinctions include the investigator’s definitions and delimitations of the case being studied, the clarity of the role of the investigator, the rigor of gathering and combining evidence about the case, and the contextualization of the findings. Delimitation is a term from qualitative research about setting boundaries to scope the research in a useful way rather than describing the narrow scope as a limitation, as often appears in a discussion section. The depth and richness of description helps readers understand the situation and whether findings from the case are applicable to their settings.
CASE STUDY AS A RESEARCH METHODOLOGY
Case study as a qualitative methodology is an exploration of a time- and space-bound phenomenon. As qualitative research, case studies require much more from their authors who are acting as instruments within the inquiry process. In the case study methodology, a variety of methodological approaches may be employed to explain the complexity of the problem being studied [ 2 , 3 ].
Leading authors diverge in their definitions of case study, but a qualitative research text introduces case study as follows:
Case study research is defined as a qualitative approach in which the investigator explores a real-life, contemporary bounded system (a case) or multiple bound systems (cases) over time, through detailed, in-depth data collection involving multiple sources of information, and reports a case description and case themes. The unit of analysis in the case study might be multiple cases (a multisite study) or a single case (a within-site case study). [ 4 ]
Methodologists writing core texts on case study research include Yin [ 5 ], Stake [ 6 ], and Merriam [ 7 ]. The approaches of these three methodologists have been compared by Yazan, who focused on six areas of methodology: epistemology (beliefs about ways of knowing), definition of cases, design of case studies, and gathering, analysis, and validation of data [ 8 ]. For Yin, case study is a method of empirical inquiry appropriate to determining the “how and why” of phenomena and contributes to understanding phenomena in a holistic and real-life context [ 5 ]. Stake defines a case study as a “well-bounded, specific, complex, and functioning thing” [ 6 ], while Merriam views “the case as a thing, a single entity, a unit around which there are boundaries” [ 7 ].
Case studies are ways to explain, describe, or explore phenomena. Comments from a quantitative perspective about case studies lacking rigor and generalizability fail to consider the purpose of the case study and how what is learned from a case study is put into practice. Rigor in case studies comes from the research design and its components, which Yin outlines as (a) the study’s questions, (b) the study’s propositions, (c) the unit of analysis, (d) the logic linking the data to propositions, and (e) the criteria for interpreting the findings [ 5 ]. Case studies should also provide multiple sources of data, a case study database, and a clear chain of evidence among the questions asked, the data collected, and the conclusions drawn [ 5 ].
Sources of evidence for case studies include interviews, documentation, archival records, direct observations, participant-observation, and physical artifacts. One of the most important sources for data in qualitative case study research is the interview [ 2 , 3 ]. In addition to interviews, documents and archival records can be gathered to corroborate and enhance the findings of the study. To understand the phenomenon or the conditions that created it, direct observations can serve as another source of evidence and can be conducted throughout the study. These can include the use of formal and informal protocols as a participant inside the case or an external or passive observer outside of the case [ 5 ]. Lastly, physical artifacts can be observed and collected as a form of evidence. With these multiple potential sources of evidence, the study methodology includes gathering data, sense-making, and triangulating multiple streams of data. Figure 1 shows an example in which data used for the case started with a pilot study to provide additional context to guide more in-depth data collection and analysis with participants.
Figure 1. Key sources of data for a sample case study.
VARIATIONS ON CASE STUDY METHODOLOGY
Case study methodology is evolving and regularly reinterpreted. Comparative or multiple case studies are used as a tool for synthesizing information across time and space to research the impact of policy and practice in various fields of social research [ 9 ]. Because case study research is in-depth and intensive, there have been efforts to simplify the method or select useful components of cases for focused analysis. Micro-case study is a term that is occasionally used to describe research on micro-level cases [ 10 ]. These are cases that occur in a brief time frame, occur in a confined setting, and are simple and straightforward in nature. A micro-level case describes a clear problem of interest. Reporting is very brief and about specific points. The lack of complexity in the case description makes the “lesson” inherent in the case obvious; although no definitive “solution” is necessarily forthcoming, this makes the case useful for discussion. A micro-case write-up can be distinguished from a case report by its focus on briefly reporting specific features of a case or cases to analyze or learn from those features.
DATABASE INDEXING OF CASE REPORTS AND CASE STUDIES
Disciplines such as education, psychology, sociology, political science, and social work regularly publish rich case studies that are relevant to particular areas of health librarianship. Case reports and case studies have been defined as publication types or subject terms by several databases that are relevant to librarian authors: MEDLINE, PsycINFO, CINAHL, and ERIC. Library, Information Science & Technology Abstracts (LISTA) does not have a subject term or publication type related to cases, despite many being included in the database. Whereas “Case Reports” is the main term used by MEDLINE’s Medical Subject Headings (MeSH) and PsycINFO’s thesaurus, CINAHL and ERIC use “Case Studies.”
Case reports in MEDLINE and PsycINFO focus on clinical case documentation. In MeSH, “Case Reports” as a publication type is specific to “clinical presentations that may be followed by evaluative studies that eventually lead to a diagnosis” [ 11 ]. “Case Histories,” “Case Studies,” and “Case Study” are all entry terms mapping to “Case Reports”; however, guidance to indexers suggests that “Case Reports” should not be applied to institutional case reports and refers to the heading “Organizational Case Studies,” which is defined as “descriptions and evaluations of specific health care organizations” [ 12 ].
PsycINFO’s subject term “Case Report” is “used in records discussing issues involved in the process of conducting exploratory studies of single or multiple clinical cases.” The Methodology index offers clinical and non-clinical entries. “Clinical Case Study” is defined as “case reports that include disorder, diagnosis, and clinical treatment for individuals with mental or medical illnesses,” whereas “Non-clinical Case Study” is a “document consisting of non-clinical or organizational case examples of the concepts being researched or studied. The setting is always non-clinical and does not include treatment-related environments” [ 13 ].
Both CINAHL and ERIC acknowledge the depth of analysis in case study methodology. The CINAHL scope note for the thesaurus term “Case Studies” distinguishes between the document and the methodology, though both use the same term: “a review of a particular condition, disease, or administrative problem. Also, a research method that involves an in-depth analysis of an individual, group, institution, or other social unit. For material that contains a case study, search for document type: case study.” The ERIC scope note for the thesaurus term “Case Studies” is simple: “detailed analyses, usually focusing on a particular problem of an individual, group, or organization” [ 14 ].
PUBLICATION OF CASE STUDY RESEARCH IN LIBRARIANSHIP
We call your attention to a few examples published as case studies in health sciences librarianship to consider how their characteristics fit with the preceding definitions of case reports or case study research. All present some characteristics of case study research, but their treatment of the research questions, richness of description, and analytic strategies vary in depth and, therefore, diverge at some level from the qualitative case study research approach. This divergence, particularly in richness of description and analysis, may have been constrained by the publication requirements.
As one example, a case study by Janke and Rush documented a time- and context-bound collaboration involving a librarian and a nursing faculty member [ 15 ]. Three objectives were stated: (1) describing their experience of working together on an interprofessional research team, (2) evaluating the value of the librarian role from librarian and faculty member perspectives, and (3) relating findings to existing literature. Elements that signal the qualitative nature of this case study are that the authors were the research participants and their use of the term “evaluation” is reflection on their experience. This reads like a case study that could have been enriched by including other types of data gathered from others engaging with this team to broaden the understanding of the collaboration.
As another example, the description of the academic context is one of the most salient components of the case study written by Clairoux et al., which had the objectives of (1) describing the library instruction offered and learning assessments used at a single health sciences library and (2) discussing the positive outcomes of instruction in that setting [ 16 ]. The authors focus on sharing what the institution has done more than explaining why this institution is an exemplar to explore a focused question or understand the phenomenon of library instruction. However, like a case study, the analysis brings together several streams of data including course attendance, online material page views, and some discussion of results from surveys. This paper reads somewhat in between an institutional case report and a case study.
The final example is a single author reporting on a personal experience of creating and executing the role of research informationist for a National Institutes of Health (NIH)–funded research team [ 17 ]. There is a thoughtful review of the informationist literature and detailed descriptions of the institutional context and the process of gaining access to and participating in the new role. However, the motivating question in the abstract does not seem to be fully addressed through analysis from either the reflective perspective of the author as the research participant or consideration of other streams of data from those involved in the informationist experience. The publication reads more like a case report about this informationist’s experience than a case study that explores the research informationist experience through the selection of this case.
All of these publications are well written and useful for their intended audiences, but in general, they are much shorter and much less rich in depth than case studies published in social sciences research. It may be that the authors have been constrained by word counts or page limits. For example, the submission category for Case Studies in the Journal of the Medical Library Association (JMLA) limited them to 3,000 words and defined them as “articles describing the process of developing, implementing, and evaluating a new service, program, or initiative, typically in a single institution or through a single collaborative effort” [ 18 ]. This definition’s focus on novelty and description sounds much more like the definition of case report than the in-depth, detailed investigation of a time- and space-bound problem that is often examined through case study research.
Problem-focused or question-driven case study research would benefit from the space provided for Original Investigations that employ any type of quantitative or qualitative method of analysis. One of the best examples in the JMLA of an in-depth multiple case study was authored by a librarian who published the findings from her doctoral dissertation; it represented all the elements of a case study. In eight pages, she provided a theoretical basis for the research question, a pilot study, and a multiple case design, including integrated data from interviews and focus groups [ 19 ].
We have distinguished between case reports and case studies primarily to assist librarians who are new to research and critical appraisal of case study methodology to recognize the features that authors use to describe and designate the methodological approaches of their publications. For researchers who are new to case research methodology and are interested in learning more, Hancock and Algozzine provide a guide [ 20 ].
We hope that JMLA readers appreciate the rigor of well-executed case study research. We believe that distinguishing between descriptive case reports and analytic case studies in the journal’s submission categories will allow the depth of case study methodology to increase. We also hope that authors feel encouraged to pursue submitting relevant case studies or case reports for future publication.
Editor’s note: In response to this invited editorial, the Journal of the Medical Library Association will consider manuscripts employing rigorous qualitative case study methodology to be Original Investigations (fewer than 5,000 words), whereas manuscripts describing the process of developing, implementing, and assessing a new service, program, or initiative—typically in a single institution or through a single collaborative effort—will be considered to be Case Reports (formerly known as Case Studies; fewer than 3,000 words).
Case Study Research Design
The case study research design has evolved over the past few years as a useful tool for investigating trends and specific situations in many scientific disciplines.
The case study has been used especially in the social sciences, psychology, anthropology, and ecology.
This method of study is especially useful for trying to test theoretical models by using them in real-world situations. For example, if an anthropologist were to live amongst a remote tribe, their observations might produce no quantitative data, but they would still be useful to science.
What is a Case Study?
Basically, a case study is an in-depth study of a particular situation rather than a sweeping statistical survey. It is a method used to narrow down a very broad field of research into one easily researchable topic.
Whilst it will not answer a question completely, it will give some indications and allow further elaboration and hypothesis creation on a subject.
The case study research design is also useful for testing whether scientific theories and models actually work in the real world. You may come out with a great computer model for describing how the ecosystem of a rock pool works, but it is only by trying it out on a real-life pool that you can see if it is a realistic simulation.
Psychologists, anthropologists, and social scientists have regarded the case study as a valid method of research for many years. Scientists are sometimes guilty of becoming bogged down in the general picture, and it is sometimes important to understand specific cases and ensure a more holistic approach to research.
The study of the amnesic patient H.M. is a classic example of research using the case study design.
The Argument for and Against the Case Study Research Design
Some argue that, because a case study is such a narrow field, its results cannot be extrapolated to fit an entire question, and that it shows only one narrow example. On the other hand, it is argued that a case study provides more realistic responses than a purely statistical survey.
The truth probably lies between the two, and it is probably best to try to combine the two approaches. It is valid to conduct case studies, but they should be tied in with more general statistical processes.
For example, a statistical survey might show how much time people spend talking on mobile phones, but it is case studies of a narrow group that will determine why this is so.
The other main thing to remember during case studies is their flexibility. Whilst a pure scientist is trying to prove or disprove a hypothesis, a case study might introduce new and unexpected results during its course, and lead research in new directions.
The argument between case study and statistical method also appears to be one of scale. Whilst many 'physical' scientists avoid case studies, for psychology, anthropology and ecology they are an essential tool. It is important to ensure that you realize that a case study cannot be generalized to fit a whole population or ecosystem.
Finally, one peripheral point is that, when informing others of your results, case studies make more interesting topics than purely statistical surveys, something that has been realized by teachers and magazine editors for many years. The general public has little interest in pages of statistical calculations but some well placed case studies can have a strong impact.
How to Design and Conduct a Case Study
The advantage of the case study research design is that you can focus on specific and interesting cases. This may be an attempt to test a theory with a typical case or it can be a specific topic that is of interest. Research should be thorough and note taking should be meticulous and systematic.
The first foundation of the case study is its subject and relevance. In a case study, you are deliberately trying to isolate a small study group, one individual case, or one particular population.
For example, statistical analysis may have shown that birthrates in African countries are increasing. A case study on one or two specific countries becomes a powerful and focused tool for determining the social and economic pressures driving this.
In the design of a case study, it is important to plan and design how you are going to address the study and make sure that all collected data is relevant. Unlike a scientific report, there is no strict set of rules so the most important part is making sure that the study is focused and concise; otherwise you will end up having to wade through a lot of irrelevant information.
It is best if you make yourself a short list of 4 or 5 bullet points that you are going to try and address during the study. If you make sure that all research refers back to these then you will not be far wrong.
With a case study, even more than with a questionnaire or survey, it is important to be passive in your research. You are much more of an observer than an experimenter, and you must remember that, even in a multi-subject case, each case must be treated individually before cross-case conclusions can be drawn.
How to Analyze the Results
Analyzing the results of a case study tends to be more opinion-based than statistical analysis. The usual idea is to collate your data into a manageable form and construct a narrative around it.
Use examples in your narrative whilst keeping things concise and interesting. It is useful to show some numerical data but remember that you are only trying to judge trends and not analyze every last piece of data. Constantly refer back to your bullet points so that you do not lose focus.
It is always a good idea to assume that a person reading your research may not possess a lot of knowledge of the subject so try to write accordingly.
In addition, unlike a scientific study, which deals with facts, a case study is based on opinion and is very much designed to provoke reasoned debate. There really is no right or wrong answer in a case study.
Martyn Shuttleworth (Apr 1, 2008). Case Study Research Design. Retrieved Nov 29, 2023 from Explorable.com: https://explorable.com/case-study-research-design
The text in this article is licensed under the Creative Commons Attribution 4.0 International license (CC BY 4.0).
Case Study Research and Applications: Design and Methods
- Robert K. Yin - COSMOS Corporation
Winner of the 2019 McGuffey Longevity Award from the Textbook & Academic Authors Association (TAA)
Recognized as one of the most cited methodology books in the social sciences, the Sixth Edition of Robert K. Yin's bestselling text provides a complete portal to the world of case study research. With the integration of 11 applications in this edition, the book gives readers access to exemplary case studies drawn from a wide variety of academic and applied fields. Ultimately, Case Study Research and Applications will guide students in the successful use and application of the case study research method.
See what’s new to this edition by selecting the Features tab on this page. Should you need additional information or have questions regarding the HEOA information provided for this title, including what is new to this edition, please email [email protected] . Please include your name, contact information, and the name of the title for which you would like more information. For information on the HEOA, please go to http://ed.gov/policy/highered/leg/hea08/index.html .
For assistance with your order: Please email us at [email protected] or connect with your SAGE representative.
SAGE 2455 Teller Road Thousand Oaks, CA 91320 www.sagepub.com
Password-protected Instructor Resources include the following:
- An expanded glossary provided by the author in the form of downloadable Briefs.
- Additional tutorials written by the author which correspond to Chapters 1, 2, 3, 5, and 6.
- A selection of author Robert Yin's SAGE journal articles.
- Tables and figures from the book available for download.
“The book is filled with tips to the researcher on how to master the craft of doing research overall and specifically how to account for multi-layered cases.”
“Yin covers all of the basic and advanced knowledge for conducting case studies and why they are useful for specific research studies, without getting lost in the weeds.”
“The applications enhance the original material because they give the reader concrete examples.”
“Yin goes into much more depth on case study methods than any general qualitative text or other case study text I have seen.”
Used on demand as a recommendation for basic literature on case study research.
An essential reading for people doing case studies.
A very thorough introduction.
Very good introduction to case study design. I have used the case study approach for my PhD study. I would recommend this book for an in-depth understanding of case study design for research projects.
Dr Siew Lee School of Nursing, Midwifery and Paramedic Practice Robert Gordon University, Aberdeen.
The book is a really good introduction to case study research and is full of useful examples. I will recommend it as the definitive source for students interested in pursuing this further in their projects.
In our Doctor of Ministerial Leadership (DML), Case Study is the Methodology that is required in this program. Robert Yin's book provides the foundational knowledge needed to conduct research using his Case Study design.
NEW TO THIS EDITION:
- Includes 11 in-depth applications that show how researchers have implemented case study methods successfully.
- Increases reference to relativist and constructivist approaches to case study research, as well as how case studies can be part of mixed methods projects.
- Places greater emphasis on using plausible rival explanations to bolster case study quality.
- Discusses synthesizing findings across case studies in a multiple-case study in more detail.
- Adds an expanded list of 15 fields that have text or texts devoted to case study research.
- Sharpens discussion of distinguishing research from non-research case studies.
- The author brings to light at least three remaining gaps to be filled in the future:
- how rival explanations can become more routinely integrated into all case study research;
- the difference between case-based and variable-based approaches to designing and analyzing case studies; and
- the relationship between case study research and qualitative research.
- Numerous conceptual exercises, illustrative exhibits, vignettes, and a glossary make the book eminently accessible.
- Boxes throughout offer more in-depth real-world examples of research.
- Short, sidebar tips help succinctly explain concepts and allow students to check their understanding.
- Exercises throughout offer students the chance to immediately apply their knowledge.
Sample Materials & Chapters
Preface: Spotlighting "Case Study Research"
Chapter 1: Getting Started
How to choose your study design
- 1 Department of Medicine, Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia.
- PMID: 32479703
- DOI: 10.1111/jpc.14929
Research designs are broadly divided into observational studies (i.e. cross-sectional, case-control and cohort studies) and experimental studies (randomised controlled trials, RCTs). Each design has a specific role, and each has both advantages and disadvantages. Moreover, while the typical RCT is a parallel group design, there are now many variants to consider. It is important that both researchers and paediatricians are aware of the role of each study design, their respective pros and cons, and the inherent risk of bias with each design. While there are numerous quantitative study designs available to researchers, the final choice is dictated by two key factors. First, by the specific research question. That is, if the question is one of 'prevalence' (disease burden) then the ideal is a cross-sectional study; if it is a question of 'harm' - a case-control study; prognosis - a cohort study; and therapy - an RCT. Second, by what resources are available to you. This includes budget, time, feasibility regarding patient numbers, and research expertise. All these factors will severely limit the choice. While paediatricians would like to see more RCTs, these require a huge amount of resources, and in many situations will be unethical (e.g. a potentially harmful intervention) or impractical (e.g. rare diseases). This paper gives a brief overview of the common study types; those embarking on such studies will need far more comprehensive, detailed sources of information.
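The question-to-design mapping described above can be sketched as a simple lookup. This is purely illustrative; the function and dictionary names are our own, not from the paper:

```python
# Illustrative lookup of the paper's question-to-design mapping.
# The names and error handling here are hypothetical, not from the paper.
STUDY_DESIGN_FOR_QUESTION = {
    "prevalence": "cross-sectional study",
    "harm": "case-control study",
    "prognosis": "cohort study",
    "therapy": "randomised controlled trial",
}

def recommend_design(question_type: str) -> str:
    """Return the study design the paper associates with a question type."""
    design = STUDY_DESIGN_FOR_QUESTION.get(question_type.lower())
    if design is None:
        raise ValueError(f"no recommendation for question type {question_type!r}")
    return design
```

Note that, as the abstract stresses, this first factor is then constrained by the second (budget, time, patient numbers, expertise), which no lookup table can capture.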
Keywords: experimental studies; observational studies; research method.
© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
Published on 27.11.2023 in Vol 7 (2023)
Design and Implementation of Survey Quality Control System for Qatar’s First National Mental Health Survey: Case Study
Authors of this article:
- Catalina Petcu, MA ;
- Ikram Boukhelif, BSc ;
- Veena Davis, PhD ;
- Hamda Shamsi, BA ;
- Marwa Al-Assi, BSc ;
- Anis Miladi, PMP ;
- Salma M Khaled, PhD
Social and Economic Survey Research Institute, Qatar University, Doha, Qatar
Catalina Petcu, MA
Social and Economic Survey Research Institute
Qatar University Street
Doha, PO Box 2713
Phone: 974 4403 7290
Email: [email protected]
Background: All World Mental Health (WMH) Surveys apply high standards of data quality. To date, most of the published quality control (QC) procedures for these surveys related to face-to-face interviews. However, owing to the social restrictions that emerged from the COVID-19 pandemic, telephone interviews became the most effective alternative for conducting complex probability-based large-scale surveys.
Objective: In this paper, we present the QC system implemented in the WMH Qatar Survey, the first WMH Survey conducted during the COVID-19 pandemic in the Middle East. The objective of the QC process was to acquire high data quality through the reduction of random errors and bias in data collection.
Methods: The QC design and procedures in this study were adapted to the telephone survey mode in response to the COVID-19 pandemic. We focus on the design of the QC indicator system and its implementation, including the investigation process, monitoring interviewers’ performance during survey fielding and applying quality-informed interventions.
Results: The study team investigated 11,035 flags triggered during the 2 waves of survey data collection. The most frequently triggered flags related to short question administration duration and multiple visits to the same survey questions or responses. Live monitoring of the interviews helped in understanding why certain duration-related flags were triggered and revealed the interviewers’ interviewing patterns. Corrective and preventive actions were taken against interviewers’ behaviors based on the investigation of triggered flags per interviewer and live call monitoring of interviews. Although in most cases the interviewers only required refresher training sessions and feedback to improve their performance, several interviewers discontinued work because of low productivity and a high number of triggered flags.
Conclusions: The specific QC procedures implemented in the course of the WMH Qatar Survey were essential for successfully meeting the target number of interviews (N=5000). The QC strategies and the new indicators customized for telephone interviews contributed to the flag investigation and verification process. The QC data presented in this study shed light on the rigorous methods and quality monitoring processes in the course of conducting a large-scale national survey on sensitive topics during the COVID-19 pandemic.
Survey quality is determined by the accuracy, reliability, and validity of data [ 1 ]. Measures implemented to acquire high data quality define the quality control (QC) process, which aims at the detection and reduction of errors in data collection as well as the prevention of practices that do not follow the study protocol. QC is defined as planned efforts to monitor, verify, and analyze the quality of data as it is being collected, thus enabling continuous quality improvement during data collection [ 2 ]. The World Mental Health (WMH) Surveys implement a high standard of QC [ 1 , 3 - 6 ] to reduce errors and unacceptable practices, including falsification in the data collection. In several countries where the WMH Survey was conducted, such as the United States, China, Germany, Lebanon, New Zealand, and Spain, data falsifications have been reported [ 1 , 4 ]. Falsifications included making up all or part of an interview, miscoding the answer, reporting the wrong case disposition, and interviewing a nonsampled individual [ 7 ]. Therefore, the implementation of QC procedures plays an important role in conducting complex community-based surveys.
To date, WMH Surveys have been conducted in >30 nations, including Arab countries such as Lebanon, Iraq, and Saudi Arabia [ 8 - 14 ]. Scholars have presented global perspectives on the epidemiology of mental disorders and provided detailed information about QC in some of the WMH studies [ 15 ]. However, the QC procedures for most of these surveys were in relation to in-person interviews or the face-to-face survey mode. Moreover, most of these studies were based on the paper-and-pencil version of the Composite International Diagnostic Interview (CIDI) and not on the computer-assisted personal interviewing (CAPI) version of the CIDI. The CIDI was designed by the World Health Organization to assess psychiatric disorders, associated risk factors, and treatment interventions. The CAPI version was a marked improvement over the original paper-and-pencil version, which took much longer to administer and was more prone to skip-logic and data entry errors. Furthermore, administration using computer technology necessitated a different form of QC procedures. Nevertheless, several published papers from the WMH countries that used CAPI technology focused mainly on survey methodology and implementation, whereas QC processes or procedures were rarely mentioned [ 6 , 9 , 16 , 17 ]. QC of computer-assisted telephone interviewing (CATI) operations in Canada [ 18 ] tracked and evaluated interviewers’ performance using quality indicators. In 1996, Mudryk et al [ 18 ] identified interviewers’ problems and explained the quality procedure for CATI implementation and methodology. The Singapore Mental Health Study [ 19 ] described the design and methodology of the research, including the quality of survey administration, field staff selection, productivity, and data analysis. The Iranian Mental Health Survey [ 17 ] includes only a short paragraph on QC of the fieldwork.
In the Middle East, large-scale mental health surveys rarely report the implementation of QC measures or procedures. However, scholars recommend the adoption of efficient QC tools and the development of strong teams to manage QC procedures as a priority [ 2 , 20 ]. To date, the most recent and comprehensive information on the development and implementation of a QC system was provided for the WMH Survey conducted in Saudi Arabia using the CAPI mode [ 21 ]; that study focused on the different phases of the QC cycle from the quality management perspective of the survey process. A study on the challenges and lessons from piloting the WMH Survey in Qatar described the development and implementation of a QC system to monitor interviewing activities in the field during the face-to-face survey pilot of the WMH Qatar study [ 22 ].
In this paper, we describe how we created and implemented a QC system for the first national mental health survey in Qatar, the WMH Qatar Survey, which is also one of the first studies in the WMH Survey consortium conducted using CATI technology during the COVID-19 pandemic. The QC design and procedures in our study were also adapted to the CATI mode of data collection. In other parts of the world, a study investigating the prevalence of suicidal thoughts and behaviors in the Spanish adult general population during the first wave of COVID-19 also used the CATI survey mode [ 23 ], whereas other studies collected data using web surveys [ 24 - 28 ]. However, none of these recent studies have focused on QC procedures.
This study provides an overall framework for implementing similar QC procedures in future large-scale population surveys, including the WMH Surveys using the CATI methodology. First, we briefly present information on the sampling design and fieldwork operations and procedures. Then, we discuss in detail the design of the QC system, implementation of quality indicators, flags investigation process, interviewers’ performance monitoring, and QC-informed interventions implemented during the fielding of the survey. At the time of writing this paper (ie, January 2022), the production and the QC process were in the final phase.
This paper aims to (1) describe the design of the QC system adapted to the CATI mode of data collection, (2) discuss the QC measures that have been implemented to detect errors in data collection and prevent practices that do not follow the study protocol, and (3) provide a framework for developing QC systems and procedures in future large-scale population surveys.
The target population of this study included Arab adults aged ≥18 years living in Qatar during the survey reference period (ie, from January 2020 to January 2021). To reach this population, the Social and Economic Survey Research Institute (SESRI) worked with local cellular phone providers to develop a cellular phone frame. A probability sample was drawn from this frame using the listed dialing technique. The proportion of adults with a cellular phone in Qatar is approximately 98%, as determined by a SESRI statistician for the National Omnibus Survey conducted in 2018. A sample drawn from this type of frame was therefore expected to have excellent coverage and representation of the target population. Because the target population could not be completely identified in the frame, we had to oversample certain groups of phone numbers that were likely to belong to the target population. This oversampling was also important to avoid sampling nonworking phone numbers. The sampled phone numbers were released for interviewing in batches to ensure that the complete call procedures were followed for all phone numbers. The phone calls were made at different times during the day and on different days of the week to maximize the chances of making contact with potential respondents.
Fieldwork Operational Team and Procedures
The fieldwork team and data collection played an essential role in the QC process. The interviewers recruited for this study were trained on the survey administration protocol and data capture software. In the following sections, we present details of the training and data collection procedures, including the software used and the changes in the phone interviewing approach necessitated by the COVID-19 pandemic.
In interviewer-administered surveys, the training is considered an important part of QC [ 29 ]. Conducting this large-scale survey by phone required a rigorous training program for the survey interviewers. To improve the performance of existing and newly recruited interviewers, training sessions were organized with the aim of enhancing the interviewers’ reading, probing, persuading, and IT skills. The training was conducted over >5 days, for 4 to 5 hours each day.
A total of 3 teams were involved in the training process: the research team, the CATI laboratory team, and the IT team at SESRI, Qatar University. The training materials included a frequently asked questions sheet to support answering respondents’ questions and concerns regarding the study. The respondent booklet contained survey response options, whereas the interviewer booklet listed scenarios and instructions for the interviewers’ review and reference, as well as scripts used during the practical part of the interviewers’ training. During the training sessions, the interviewers were familiarized with the questionnaire, including the diagnostic and nondiagnostic modules of the CIDI. The importance and aims of the study were highlighted, as well as the study’s media coverage. General interviewing techniques (eg, reading questions, probing, accurately entering respondents’ responses and comments, and addressing soft and hard refusals) were also covered by the CATI laboratory team during the training, in addition to the study’s interviewing protocol and guidelines. The training also covered the importance of adherence to the ethical standards of scientific research, including informed consent and confidentiality. The IT team demonstrated the use of Blaise software (CBS Statistics) for entering data and trained the interviewers on IT-related procedures. Participants in the training program who performed best on the scored tasks during the training were selected to become interviewers in the WMH Qatar study.
CATI Laboratory Organizational Structure
The CATI team comprised a manager, an assistant data collection specialist, 6 supervisors, and 42 interviewers. The interviewers worked 1 of 2 shifts (ie, 9 AM to 3 PM and 3 PM to 9 PM) every day of the week from Sunday to Saturday.
CATI is a telephone interviewing method in which interviewers use an electronic device, specifically a tablet, with the Blaise data entry client software to read the survey script and enter the information collected, and with Cisco Communicator software (Cisco Systems) to make the calls [ 30 ]. The survey used CATI to administer the CIDI 5.0 instrument during the 2 waves of production. The first wave ran from January 20, 2021, to April 12, 2021, whereas the second wave ran from May 25, 2021, to July 15, 2021, and then continued after the summer break from September 19, 2021, to January 2022.
The data collected were automatically sent in real time to the SESRI central server using the Case Management System (Cisco Systems). Because of COVID-19 restrictions, interviewers conducted their activities from home. Remote work required each interviewer to connect to the GlobalProtect Virtual Private Network (Palo Alto Networks) [ 31 ] to ensure a secure connection and to protect data transfer to Qatar University’s servers against mobile threats. Such computerized data collection methods reduce item-level missing data, yield timely data, and capture process data, or paradata. Paradata included call records, interviewer observations, time stamps, and call dispositions.
Because of the restrictions imposed on face-to-face interviews during the COVID-19 pandemic, it was decided to move CATI survey operations to a distributed CATI mode, as opposed to a centralized call center. In this data collection mode, because interviewers worked from home, they tended not to adhere to their working hours. Furthermore, supervisors were not able to directly monitor interviewers as they worked. To fill this gap, the SESRI IT team developed an app called CATI Time Tracker that allowed interviewers to log their work activities and breaks throughout their working shifts, which were accessible to CATI managers and supervisors. The CATI Time Tracker app also facilitated shift management and interviewer evaluations for supervisors, as well as provided technical support. The data generated through the app were fed into the QC database, which were used to monitor predefined QC indicators during the data collection phase of the study.
This study was approved by Qatar University Institutional Review Board. The Research Ethics Approval Number is QU-IRB 1219 EA/20. As the survey was conducted using the telephone mode of interviewing, verbal informed consent to participate was obtained from all study participants. To safeguard against potential risks of participation, standard research protocols and the Health Insurance Portability and Accountability Act were followed. All respondents were assigned anonymous study codes for identification, and only aggregate data were used.
QC Indicator System
The QC database mentioned above was a key component of the QC indicator system (QCIS). The QCIS transformed, integrated, and aggregated data into tables that stored information on various indicators, allowing the QC team to follow the progress of data collection through visualizations and to investigate the performance of the interviewers. To present the QC system holistically, this section discusses the components of the QCIS, the modifications implemented to adapt the QC process to the phone interviewing mode, and the metrics and indicators, also called flags, that the QC team referred to when investigating, monitoring, and correcting interviewers’ performance and behavior.
Server Infrastructure and Data Components
The data collected by the interviewers and the paradata were categorized into different sources that fed into the QC system. Processed on different schedules, the quality indicators objectively measured the efficiency of the essential segments of data collection. Quality indicators were vital in the QC process because they offered rapid insight into the quality of the collected data, the performance of the interviewers, and patterns in their performance over time [ 32 ].
QCIS was developed by the SESRI IT team for the WMH Qatar study. The foundation of the QC tool was initially established for the CAPI survey mode in collaboration with the University of Michigan team. Switching to CATI mode involved major changes to the QC tool implemented by the SESRI IT team; these changes are presented below. In developing the QC tool, the SESRI IT team identified variables for QC indicator flags, created scripts to generate the indicator flag, designed a schema to store flag information and desired aggregated data, designed a periodic QC indicator processor to update the flag data in the Master QC, and developed sample charts and tables to use as visualizations of the QC indicators.
Data coming from different sources were loaded into the QC database. Before the pandemic, when the interviewing mode was face-to-face, the QC infrastructure had the following sources: the CATI audit trail (ADT), the Sample Management System, and Blaise Response Data CIDI. These sources were processed into the QC database, and, using SQL Server Analysis Services to transform the data and its tabular model to aggregate it, the indicators were reported through charts, tables, and graphs on the Power Business Intelligence (Power BI; Microsoft Corp) dashboard. The structural changes implemented by the SESRI IT team because of the new mode of interviewing involved the addition of 3 new sources: Cisco call detail records (CDR), the CATI Time Tracker, and the Master Project Repository. Moreover, the Analysis Services model was removed, and the data extraction, transformation, and loading process was integrated within the QC database ( Figure 1 ).
For a better understanding of the raw data fed into the QC database, a brief description of the data sources will provide insights into the information used for developing indicators. The 6 data sources are explained as follows:
- CATI ADT was generated by the Blaise software as interviewers went through the interview. ADT data were parsed daily and included information on Blaise sessions, all activities in survey fields, including start and end timestamps, and keystrokes. This information was used when flagging for survey time–related QC indicators.
- The Sample Management System used a call scheduler, which was responsible for scheduling telephone numbers in the day batch, making the cases available to the interviewers at the scheduled time.
- Blaise response data were also consumed by the QC system, including the start and end timestamps of the call, call outcome, appointment details, case status, respondent type, and gender. The Blaise data were collected in near real time (delays of a few seconds or minutes) and were vital for performing QC and for monitoring the progress of the study.
- CDR data provided a record of all calls that have been made or received by users of the Cisco Call Manager system and were useful for tracking call activity and monitoring sessions. CDR data were collected by SESRI, filtered to keep only outgoing calls from SESRI phone numbers, and transformed to extract details such as call start and end timestamps, duration of the call, correct phone number, and monitoring sessions that occurred during the call, if any.
- The CATI Time Tracker was also an important source that provided details about interviewers’ activities. The Time Tracker helped the data collection agents keep track of how each interviewer’s workday was distributed across different predefined activities such as calling, shift preparation, and technical support. These data were compared with the agent’s Blaise activity to check for underreporting or overreporting of work hours.
- The Master Project Repository was a database that served as the basis for the payroll system. It included data such as phases of studies conducted by SESRI, employee details, agents participating in each study and their roles, past payment details, and history of agent employment.
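The reported-versus-observed hours check described for the CATI Time Tracker can be sketched roughly as below. The function name, the 30-minute tolerance, and the labels are hypothetical placeholders, not the study's actual rule:

```python
def hours_check(tracker_minutes: float, blaise_active_minutes: float,
                tolerance: float = 30.0) -> str:
    """Compare an interviewer's self-logged work time (CATI Time Tracker)
    with the active time reconstructed from Blaise session data.
    The 30-minute tolerance is an invented placeholder value."""
    diff = tracker_minutes - blaise_active_minutes
    if diff > tolerance:
        return "possible overreporting"   # logged far more time than observed
    if diff < -tolerance:
        return "possible underreporting"  # logged far less time than observed
    return "consistent"
```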
These data sources underwent preprocessing, including filtering, parsing, and reshaping. This was done daily through special scripts and run using TeamCity (JetBrains) software. After preprocessing, the data were loaded into the QC database, which further transformed, integrated, and aggregated data into tables that stored information on dials, interviewer activities, values for different metrics and indicators, and the status of cases. This process was done using a series of specialized procedures that ran consecutively on SQL Server Management Studio [ 33 ] and initiated by TeamCity every 15 minutes throughout the day. This process allowed for the updated progress of data collection to be seen as visualizations on the Power BI dashboard.
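A minimal sketch of this two-stage flow (preprocessing a raw source by filtering and reshaping it, then loading it into a QC table) might look as follows. The schema and field names are hypothetical, and SQLite stands in for the SQL Server setup described above:

```python
import sqlite3

def preprocess_cdr(rows):
    """Filter Cisco CDR records to outgoing calls and keep only the
    fields the QC database needs (field names are illustrative)."""
    return [
        (r["call_id"], r["start_ts"], r["end_ts"], r["duration_s"])
        for r in rows
        if r["direction"] == "outgoing"
    ]

def load_calls(conn, calls):
    """Load preprocessed call records into a QC database table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS calls "
        "(call_id TEXT, start_ts TEXT, end_ts TEXT, duration_s REAL)"
    )
    conn.executemany("INSERT INTO calls VALUES (?, ?, ?, ?)", calls)
    conn.commit()
```

In the actual system these steps ran as scheduled jobs (daily preprocessing via TeamCity, aggregation every 15 minutes) rather than as ad hoc function calls.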
Indicators or Flags in Power BI
The visualizations on the Power BI dashboard were based on indicators that have been attentively selected and implemented to reflect the requirements of the phone interviewing mode. The SESRI IT team and the research team met regularly to discuss the indicators required for the new mode of interviewing.
The University of Michigan team assisted in the implementation of these modified flags. After reassessing the old indicators for the CAPI mode, a few of the face-to-face mode indicators were kept with new cutoffs, and new indicators were developed by the SESRI IT team and displayed on the Power BI dashboard ( Textbox 1 ). The initial cutoffs were determined by the University of Michigan team and had been used in previous face-to-face WMH Surveys. Nevertheless, when the interviewing mode of the WMH Survey in Qatar changed to telephone mode owing to COVID-19 social restrictions, the cutoff values of the CAPI mode indicators were no longer applicable. The IT team at SESRI implemented new cutoffs based on the features of telephone interviewing. For example, the shorter length of phone interviews compared with face-to-face interviews led to a different cutoff value for the short average interview length flag. The country’s cultural setting was also considered when determining the cutoff values for some of the flags. For example, prayer times affected the cutoff values used for the long question field time and long interview length flags. The pilot phase of the study provided an important opportunity to further test and adjust these values before the survey production phase.
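A duration-related flag check of the kind described above might be sketched as follows. The cutoff values below are invented placeholders, not the study's actual thresholds, and the flag names merely echo those mentioned in the text:

```python
# Hypothetical cutoffs in minutes; the study's real values are not published here.
SHORT_AVG_INTERVIEW_MIN = 25.0
LONG_INTERVIEW_MAX = 120.0

def duration_flags(interviewer_id: str, interview_minutes: list) -> list:
    """Return (interviewer_id, flag_name, value) tuples for duration flags."""
    flags = []
    avg = sum(interview_minutes) / len(interview_minutes)
    if avg < SHORT_AVG_INTERVIEW_MIN:
        flags.append((interviewer_id, "short_average_interview_length", avg))
    for minutes in interview_minutes:
        if minutes > LONG_INTERVIEW_MAX:
            flags.append((interviewer_id, "long_interview_length", minutes))
    return flags
```

In practice, as the text notes, such cutoffs had to be tuned for the telephone mode and the local cultural setting (eg, prayer times) during the pilot phase.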
To facilitate flag identification by each team, the indicators were categorized into CIDI-related indicators, duration-related indicators, IT indicators, and operational indicators. This categorization allowed each team to focus on the flags assigned for review and verification. In addition, the IT team developed new metrics to verify whether the interviewers were adhering to their working hours and to collect monitoring details ( Textbox 2 ). This was of great importance as the interviewers were working from home, and the supervisors were not able to directly monitor the interviewers as they did in the centralized CATI system setup. Quality indicators were one of the tools used to monitor and control process functioning, whereby the data collected provided a basis for the implementation of corrective measures and continuous quality improvement activities.
After discussions with the research team, the IT team prioritized the information to be displayed on the first page of the Power BI dashboard. This page provided a summary of interviewer progress that the field operations and research teams could access quickly. The overall number of completed interviews per hour and the total duration of all Cisco calls were displayed on the dashboard, along with a chart of the percentages of long and short completed interviews. Other indicators displayed on the main page were total sample size, sampled cases, total completed interviews, completed interviews per day, total number of interviewers, number of active interviewers per day, total number of dials, number of dials per day, disposition summary, and respondent type and gender ( Figure 2 ).
Other tabs in the Power BI dashboard included visualizations of dispositions, overall performance, Cisco call details, interviewer activity, evaluation, and appointments. Another page containing all the flags by interviewer provided a holistic perspective on the overall performance of the interviewers and enabled users to filter the information by date, interviewer, category, and type of flag, including the flags calculated over a period of 2 weeks, called z score flags.
All the mentioned flags and metrics were displayed on the Power BI dashboard via graphs, charts, tables, and other infographics. The Power BI dashboard was customized for the WMH Qatar study, which helped the research team maintain oversight of survey progress and adherence to the protocol. The indicators were automated and updated daily, enabling live updates on productivity and interviewers’ performance in the field.
List of flags
- High number of completes
- Short question field time
- Short stem question field time
- High percentage of short field time
- Short interview length
- Long interview length
- Short average interview length
- High number of negative stems
- Multiple field visits
- Multiple stem-field visits
- Low prevalence rate
- Long question field time
- Long treatment length
Metrics developed by the Social and Economic Survey Research Institute IT team
- Monitoring time per case
- Monitoring time per interviewer
- Blaise survey time
- Blaise adjusted total time
- Blaise treatment time
- Cisco calling time
- Average after call work time
- Dials per hour
- Number of completions per hour
- Adjusted number of completions per hour
- Discrepancy in reported working time
- Audit trail section length
- Blaise timeout bug flag
QC Operational Team and Procedures
For the WMH Qatar Survey, QC procedures included automated procedures applied to the collected data through the QC tool, manual verification and investigation of the indicators, prioritizing and filtering the most concerning flags, monitoring the performance of the interviewers, evaluating performance and progress, and applying corrective and preventive measures to improve interviewers’ performance. The composition of the QC team and the details of each QC procedure are presented as follows.
Organization of QC Team
The QC research team comprised the QC supervisor, who oversaw the QC process from the research side and investigated the flags; a second flag investigator from the IT team; and 4 phone monitors in charge of evaluating the interviewers during the interviews. In addition, 5 supervisors from the CATI laboratory monitored and evaluated the interviewers.
Verification and Flags Investigation Process
The process of reviewing the flags served 4 main objectives: (1) obtaining information about the interview (eg, length, attempts, revisiting questions, and duration of the questions), (2) providing information about the overall data quality of the survey (eg, low prevalence and a high number of negative stem questions), (3) detecting and preventing data falsification, and (4) improving interviewer performance by reinforcing the requirement that performance fall within specific parameters.
Several steps pertained to the verification and investigation processes ( Figure 3 ). The QC system generated flags daily; these flags were loaded onto the Power BI dashboard and created as tasks on Microsoft Planner (a program used to document the notes and actions taken for each flag). The team started investigating the flags by exporting from the dashboard the table of flags triggered by each interviewer within a specific interval (usually 3 days). The initial step was to determine whether the flag was valid and whether there were technical issues. If a technical issue was present (eg, discrepancies, erroneous values such as 999, and duplicated z scores), a task was created for the IT team to check and resolve the flag issue. If no technical issues were identified, the flags were compiled in an Excel (Microsoft Corporation) sheet with details, notes, and comments and presented to the entire QC research team during biweekly meetings. It was crucial to prioritize and filter the flags according to their level of gravity. The flag investigator determined, based on the findings of the examination, whether a flag was concerning. If a flag was concerning, it was reassigned to the monitoring team in the Planner for further investigation. In addition to assigning flags to the Planner, the QC supervisor sent an email at the end of each biweekly meeting with a list of interviewers who required further monitoring and evaluation of their skills. The monitoring team then observed the interviewers, added notes and actions in the Planner, and reassigned the task to the QC supervisor to review the added notes. If the explanations and actions were satisfactory, the task was recorded as completed, and the dashboard was updated with the task status.
Monitoring the Interviewers
Monitoring allowed us to further investigate suspicious patterns of interviewer behavior and aided us in understanding why certain flags were triggered. The QC staff monitored the progress of the survey activities. Most WMH Surveys recommended 1 monitor for every 8 to 10 experienced interviewers [ 34 ]. The monitoring team identified where mistakes occurred during the interview and under what circumstances, and also provided insight into respondents’ behaviors [ 35 ].
The interviews were monitored live (not recorded) by members of the research team and CATI laboratory supervisors using the Cisco Finesse software (Cisco Systems). The monitors looked for specific anomalies according to the concerning flags assigned by the QC supervisor in the Planner. For instance, if an interviewer was flagged for having a short question field time, the phone monitor observed the reading speed and verified whether the interviewer was reading the questions verbatim as per the protocol.
The entire duration of all dials made by the interviewers was 9767 hours. We defined dial duration as the length of a telephone connection, starting when the respondent answered the incoming phone call. Monitoring accounted for 6.01% (586/9767) of the total dial time. Among completed cases only, 12.08% (543/4496) were monitored for at least 15 minutes. Approximately 21% (944/4496) of the completed cases were monitored for at least a few minutes (including cases monitored for both <15 min and ≥15 min).
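The coverage percentages above follow directly from the reported counts; as a quick check (note that 586/9767 rounds to 6.00%, marginally below the reported 6.01%, presumably because the published figure was computed from unrounded underlying durations):

```python
# Reproduce the monitoring-coverage percentages from the raw counts in the text.
total_dial_hours = 9767   # total duration of all dials
monitored_hours = 586     # total monitored duration
completed_cases = 4496    # completed interviews
monitored_15min = 543     # completed cases monitored for at least 15 minutes
monitored_any = 944       # completed cases monitored for any duration

print(round(100 * monitored_hours / total_dial_hours, 2))  # 6.0 (reported: 6.01)
print(round(100 * monitored_15min / completed_cases, 2))   # 12.08
print(round(100 * monitored_any / completed_cases))        # 21
```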
The monitoring team evaluated the interviewers by scoring their skills on the Power BI dashboard and CATI Time Tracker. The following skills were evaluated: probing techniques, dealing with objections and refusals, ability to persuade participants to participate in the survey, typing speed, project knowledge and the ability to answer questions and concerns, and reading the question verbatim without extra explanation or personal opinions. The interviewers’ working hours and the number of completed interviews were also verified. Each skill was scored on a scale from 0 (lowest) to 5 (highest). Monthly and biweekly vouchers were offered to interviewers who adhered to the study protocols, triggered fewer flags, and showed progress after receiving feedback.
Monitoring notes revealed that most of the flagged interviewers were reading the questionnaire in a rapid manner, skipping words, pronouncing words unclearly, and providing interpretations. Such behaviors required both preventive and corrective measures to reinforce the importance of interviewing protocols and guidelines. The preventive actions included refresher sessions, observing the interviewer’s performance for anomalies, reminding the interviewers to follow fieldwork protocols, and using the interviewer booklet for clarifications. Corrections were made by supervisors and phone monitors by contacting the interviewers to provide feedback and instructions. In case of lack of improvement, further action was taken by the CATI laboratory manager by sending a final warning to the interviewers who failed to show any progress.
From the beginning of fielding until the moment of writing this paper, the QC team investigated the flags, focusing on the low prevalence rate, negative stems, number of completed interviews per day, and duration of the interview and questions. The team investigated 11,035 flags triggered during the 2 waves of survey production.
Table 1 lists the number of flags triggered by category. The highest number of flags triggered among all the indicator categories was for the multiple stem-field visits flag (4584/11035, 41.54%). A high percentage of multiple visits by interviewers was observed for the stem questions in the stressful experiences module (2751/4584, 60.03%), depression module (1243/4584, 27.12%), and worry and anxiety module (277/4584, 6.05%). With a cutoff of 1 visit per stem question, most stem questions were visited 2 times (3300/4584, 72%) or 3 times (1019/4584, 22.24%). The highest number of stem-field visits was 13, for a question in the stressful experiences module.
Of the total of 1926 multiple field visits flags for nonstem questions ( Table 1 ), the highest percentages per module were 20.06% (386/1926) in the stressful experiences module, 20.32% (391/1926) in the health module, and 10.4% (200/1926) in the COVID-19 section. With a cutoff of 3 visits per nonstem question, most of the nonstem questions were visited 4 times (1213/1926, 63.02%) or 5 times (484/1926, 25.13%). The highest numbers of visits, 24 and 34, were for the same question about confirming age and residency in the phone introduction section.
The QC team investigated multiple field visits for stem and nonstem questions, focusing on whether the interviewers were changing answers to negative ones to minimize the duration of the interview. In most cases, the answers were not modified, and when interviewers were monitored or contacted for explanations, it was found that respondents wanted to revisit the questions because they had not understood them. Other motives for revisiting questions included call interruptions, entry of an option before the respondent had finished answering, and respondents’ requests to return to a module from the beginning of the survey. Another motive was the sensitive nature of some questions in these modules, which may have prompted the readdressing of questions. This finding also explains why the module with the most multiple field visits (stem and nonstem) was the stressful experiences module, which contained questions about traumatic situations, including sexual harassment and rape.
The short question field time was the second most triggered flag (3224/11035, 29.21%). The research team tested the questionnaire, verified which questions took <3 seconds by default, and eliminated those from the questions to be flagged. Upon further monitoring of the interviewers who read the questions in <3 seconds and were flagged, we found that they were reading the questions rapidly, skipping words, selecting an option before the respondent gave a final answer, and even paraphrasing in their own words rather than reading verbatim. Corrective actions for such behavior included reinforcing instructions through SMS text messages or phone calls by a member of the monitoring team. An example of a corrective message to an interviewer was as follows: read the questions clearly at a medium speed (especially the stem questions), read the full answer options if applicable, and record the answer after the respondent’s response, not while they are still talking. Preventive actions included the supervisor and the monitoring team keeping an eye on a particular interviewer’s performance for anomalies, as well as reminding the interviewers to follow the necessary fieldwork protocols.
The prevalence flag indicated whether the cases had disorders. The flag accounted for both positive and negative values calculated over a period of 2 weeks, with a cutoff of >+1.5 z score for positive prevalence and <−1.5 z score for negative prevalence. The average value was 0.43 (SD 0.11). The highest negative prevalence flag was −2.64 (SD 0.16), whereas the highest positive prevalence flag was 2.47 (SD 0.17). In total, only 21 flags were triggered for the negative prevalence indicator, where the cases included in the calculations had no disorders, and 26 flags were triggered for the positive prevalence flag, where cases had many disorders. When investigated and correlated with other flags, some interviewers with a high negative prevalence also had short interviews and short question times. These interviewers were further investigated and monitored to determine whether they were developing certain patterns when interviewing, and corrective actions were taken. Interviewers with high positive prevalence flags exhibited greater skill in persuading respondents to disclose their mental health disorders.
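The 2-week z score mechanism behind the prevalence flag can be sketched as follows. The interviewer identifiers and rates below are illustrative only, and the study's exact windowing and standardization procedure may differ; only the ±1.5 cutoff is taken from the text.

```python
import statistics

# Hedged sketch of the 2-week z score prevalence flag: an interviewer is
# flagged when their prevalence rate deviates by more than 1.5 SDs from the
# group mean over the window. Interviewer IDs and rates are illustrative.

Z_CUTOFF = 1.5

def prevalence_z_flags(rates: dict[str, float]) -> dict[str, str]:
    """Map interviewer id -> flag type for rates beyond the +/-1.5 z cutoff."""
    mean = statistics.mean(rates.values())
    sd = statistics.stdev(rates.values())
    flags = {}
    for interviewer, rate in rates.items():
        z = (rate - mean) / sd
        if z > Z_CUTOFF:
            flags[interviewer] = "positive prevalence"
        elif z < -Z_CUTOFF:
            flags[interviewer] = "negative prevalence"
    return flags

two_week_rates = {"int01": 0.42, "int02": 0.45, "int03": 0.40,
                  "int04": 0.43, "int05": 0.10}  # int05 is an outlier
print(prevalence_z_flags(two_week_rates))  # only int05 is flagged (negative)
```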
Most of the short and long interviews were just a few minutes beyond the cutoff values. The interviewers who accumulated a high number of short interview length and long interview length flags were monitored. It was found that, in the case of short interviews, the interviewers were reading at a fast pace but clearly, whereas long interviews were explained by technical issues and interruptions, respondents who provided detailed answers or comments, interviewers reading the questionnaire slowly, or breaks requested by the respondent. In the case of a break of >7 minutes, the case was flagged with long question field time , which was investigated by tracing the duration of the reported working time, Cisco calling time, ADT time, and Blaise time. In addition, we verified which question the interviewer was on during the long pause and whether the interviewer had left any remarks for that question. Some interviewers left notes saying that the respondent took a short break to pray (as Muslims pray 5 times each day) or to attend to domestic tasks. In cases where no explanation was provided for a break, the monitoring team observed the interviewers and took corrective actions.
Some of the cases flagged with high number of completed interviews and with high percentage of short field time (>30% of short questions within 1 interview) also triggered the short question field time and short interview length flags. However, in many instances, the interviewers flagged with a high number of completed cases per day were completing cases on the second or third attempt, meaning that earlier interviewing sessions for the same cases had already covered part of the survey modules.
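The high percentage of short field time flag reduces to a simple per-interview proportion test. In the sketch below, the 3-second and 30% thresholds come from the text; the function name and sample field times are hypothetical.

```python
# Hedged sketch of the "high percentage of short field time" flag: an
# interview is flagged when more than 30% of its questions were displayed
# for under 3 seconds (thresholds from the text; data illustrative).

SHORT_FIELD_SECONDS = 3.0
SHORT_SHARE_CUTOFF = 0.30

def high_short_field_time(field_times: list[float]) -> bool:
    """True if the share of questions shown for <3 s exceeds 30%."""
    short = sum(1 for t in field_times if t < SHORT_FIELD_SECONDS)
    return short / len(field_times) > SHORT_SHARE_CUTOFF

print(high_short_field_time([2.1, 1.8, 2.5, 8.0, 9.5]))  # 3/5 short -> True
print(high_short_field_time([5.0, 6.2, 2.4, 7.1, 9.0]))  # 1/5 short -> False
```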
In the case of the short average interview length flag, the average interview length was calculated for each interviewer every 2 weeks. If the z score for this flag was ≤−2, the interviewer was flagged. We had only 14 short average interview length flags, largely because of the long nature of the WMH Survey, in which the average interview length was 94 minutes for the long version and 64 minutes for the short version.
On the basis of the investigation of triggered flags per interviewer and the monitoring results, corrective and preventive actions were taken. Although, in most cases, interviewers required only refresher training sessions and feedback to improve their performance, several interviewers discontinued work owing to low productivity and a high number of triggered flags.
Limitations and Challenges
Developing new flags involved determining appropriate cutoffs adapted to the telephone interviewing mode. This was challenging because the previous phone interviews conducted at SESRI had not been based on long and complex questionnaires, as was the case for the WMH Qatar Survey. In this regard, some flags required additional calculations to reflect the various factors affecting the predetermined values. For instance, 3 weeks into production, we discovered that many high number of completes flags were being triggered. An in-depth investigation of the collected data showed that some interviewers were working extra hours and completing more interviews, and the flag calculation did not take these extra hours into account. After accounting for extra hours, the number of these flags decreased from 85 to 18 (at the moment of discovery and implementation of the new calculation).
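The recalculation described above amounts to normalizing completed interviews by hours actually worked, rather than flagging raw daily counts. The cutoff value, function name, and example figures in this sketch are hypothetical placeholders.

```python
# Hedged sketch of adjusting the "high number of completes" flag for extra
# working hours: flag the hourly completion rate instead of the raw daily
# count, so interviewers working longer shifts are not flagged spuriously.
# The cutoff and example numbers are hypothetical.

COMPLETES_PER_HOUR_CUTOFF = 1.0  # hypothetical threshold

def flag_high_completes(completes: int, hours_worked: float) -> bool:
    """Flag only when the hourly completion rate exceeds the cutoff."""
    return completes / hours_worked > COMPLETES_PER_HOUR_CUTOFF

# An interviewer with 10 completes in a standard 8-hour day is flagged...
print(flag_high_completes(10, 8))    # True
# ...but not one with the same 10 completes over a 12-hour extended shift.
print(flag_high_completes(10, 12))   # False
```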
Microsoft Planner posed some difficulties at the start of the survey production phase in relation to documenting the flag investigation process, monitoring observations, and assigning corrective interventions. We initially allocated each flag a task on the Planner. This led to hundreds of tasks generated on a daily basis in the Planner, making it logistically impossible to review and prioritize the flags triggered. We decided to organize the tasks per interviewer instead and to generate weekly frequency for the flags triggered most often (eg, short question field time, multiple field visits, and multiple stem-field visits) rather than daily frequency.
To mitigate the challenges faced during the design, implementation, investigation, and verification phases, the QC team proposed innovative solutions and experimental stages to address the various factors affecting the survey QC process. Because this was one of the first studies in the WMH Survey consortium conducted in CATI mode during the COVID-19 pandemic, no prior evidence was available on how to incorporate QC procedures within these complex surveys. The available but limited literature from other parts of the world on CATI surveys [ 23 ] and web surveys [ 24 - 28 ] did not focus on QC procedures. This study aimed to provide an overall framework for implementing similar QC procedures in future large-scale population surveys, including WMH Surveys using CATI technology. Nevertheless, further research and experimentation are necessary to develop a sound QC process adapted for large-scale telephone survey interviews.
QC procedures were essential in conducting the WMH Qatar phone survey in the context of the COVID-19 pandemic. The IT team, the CATI laboratory team, and the research team collaborated to develop strategies to conduct and manage the QC procedures. Switching from the CAPI mode to CATI mode required a complete transformation of the previously established QC system. Initially, the CATI laboratory was based on a centralized call center, and because of the COVID-19 outbreak, the operations had to be switched to a decentralized call center. Various programs were introduced, such as CATI Time Tracker, developed by the SESRI IT team, to accommodate the requirements of phone surveying. Microsoft Planner was also used to provide justifications and document the actions taken by the research team and the CATI laboratory team.
Developing new indicators customized for the phone mode of surveying enhanced the QC process. The flag investigation and verification of the interviewers’ performance shed light on how the interviewers conducted the telephone interviews and the QC standards followed during the data collection phase of the WMH Qatar Survey.
Feasible methods of QC are a high priority for institutions and individuals conducting studies that involve data collection of sensitive and costly population-based surveys. QC measures may present a challenge for remote monitoring of interviewers’ performance. Our efforts to design and implement a survey QC system attempted to solve some of the challenges related to the phone mode of interviewing, especially when conducted remotely for large-scale and costly surveys. We also attempted to build a prototype of the QC system for phone interviewing that can be further used in our local research community. Our findings will help shape future QC procedures in similar studies in which a probability sample is designed under conditions of necessity. This is relevant for survey practice today more than ever because of the increasing demand to collect more quality data at a lower cost and its implications for health policies.
The authors would like to express their great appreciation to the Social and Economic Survey Research Institute IT team led by Anis Miladi for developing the quality control indicator system. We would also like to thank Sarah Broumand, Jennifer Kelley, Gina-Qian Cheung, and Zeina Mneimneh from the University of Michigan’s Survey Research Center for their support in establishing, implementing, and verifying the quality control indicator system for the World Mental Health Qatar Study.
This study was supported by the Hamad Medical Corporation through the Cambridge and Peterborough National Institutes of Health Fund (grant QUEX‐ESC‐CPFT‐18/19).
Open access funding provided by the Qatar National Library.
Data sharing is not applicable to this paper because no data sets were generated or analyzed in this case study.
All the authors were involved in developing and implementing quality control procedures in the World Mental Health Qatar Study. CP conceived, drafted, edited, and reviewed the manuscript. IB, HS, MA-A, and VD contributed to writing this paper. AM contributed to the writing and reviewed the technical details. SMK contributed to the conception, editing, and review of the manuscript. All authors read and approved the final manuscript.
Conflicts of Interest
- Üstün TB, Chatterji S, Mechbal A, Murray CJ. Quality assurance in surveys: standards, guidelines and procedures. In: Household Sample Surveys in Developing and Transition Countries. New York, NY. Department of Economic and Social Affairs Statistics Division, United Nations; 2005:199-228
- Hansen SE, Benson G, Bowers A, Pennell BE, Lin Y, Duffey B. Guidelines for best practice in cross-cultural surveys. Cross-Cultural Survey Guidelines. 2016. URL: http://www.ccsg.isr.umich.edu/images/PDFs/CCSG_ [accessed 2023-11-12]
- Boing AC, Peres KG, Boing AF, Hallal PC, Silva NN, Peres MN. EpiFloripa Health Survey: the methodological and operational aspects behind the scenes. Rev Bras Epidemiol 2014 Jan;17(1):147-162 [ https://www.scielo.br/scielo.php?script=sci_arttext&pid=S1415-790X2014000100147&lng=en&nrm=iso&tlng=en ] [ CrossRef ] [ Medline ]
- Kessler RC, Berglund P, Chiu WT, Demler O, Heeringa S, Hiripi E, et al. The US National Comorbidity Survey Replication (NCS-R): design and field procedures. Int J Methods Psychiatr Res 2004;13(2):69-92 [ http://hdl.handle.net/2027.42/34222 ] [ CrossRef ] [ Medline ]
- Viana MC, Teixeira MG, Beraldi F, de Santana Bassani I, Andrade LH. São Paulo Megacity Mental Health Survey - a population-based epidemiological study of psychiatric morbidity in the São Paulo metropolitan area: aims, design and field implementation. Rev Bras Psiquiatr 2009 Dec;31(4):375-386 [ https://psycnet.apa.org/record/2011-08110-015 ] [ CrossRef ]
- Xavier M, Baptista H, Mendes JM, Magalhães P, Caldas-de-Almeida JM. Implementing the World Mental Health Survey Initiative in Portugal - rationale, design and fieldwork procedures. Int J Ment Health Syst 2013 Jul 09;7(1):19 [ https://ijmhs.biomedcentral.com/articles/10.1186/1752-4458-7-19 ] [ CrossRef ] [ Medline ]
- Bredl S, Winker P, Kötschau K. A statistical approach to detect interviewer falsification of survey data. Survey Methodol 2012;38(1):1-10 [ https://www150.statcan.gc.ca/n1/pub/12-001-x/2012001/article/11680-eng.pdf ]
- Al-Habeeb A, Altwaijri Y, Al-Subaie AS, Bilal L, Almeharish A, Sampson N, et al. Twelve-month treatment of mental disorders in the Saudi National Mental Health Survey. Int J Methods Psychiatr Res 2020 Sep;29(3):e1832 [ https://europepmc.org/abstract/MED/32519421 ] [ CrossRef ] [ Medline ]
- Altwaijri YA, Al-Habeeb A, Bilal L, Shahab MK, Pennell BE, Mneimneh Z, et al. The Saudi National Mental Health Survey: survey instrument and field procedures. Int J Methods Psychiatr Res 2020 Sep;29(3):e1830 [ https://europepmc.org/abstract/MED/33245571 ] [ CrossRef ] [ Medline ]
- Aradati M, Bilal L, Naseem MT, Hyder S, Al-Habeeb A, Al-Subaie A, et al. Using knowledge management tools in the Saudi National Mental Health Survey helpdesk: pre and post study. Int J Ment Health Syst 2019 May 10;13:33 [ https://ijmhs.biomedcentral.com/articles/10.1186/s13033-019-0288-5 ] [ CrossRef ] [ Medline ]
- Harris MG, Kazdin AE, Chiu WT, Sampson NA, Aguilar-Gaxiola S, Al-Hamzawi A, et al. WHO World Mental Health Survey Collaborators. Findings from world mental health surveys of the perceived helpfulness of treatment for patients with major depressive disorder. JAMA Psychiatry 2020 Aug 01;77(8):830-841 [ https://europepmc.org/abstract/MED/32432716 ] [ CrossRef ] [ Medline ]
- Home page. Harvard Medical School. 2021. URL: https://hcp.hms.harvard.edu/ [accessed 2023-11-12]
- Karam EG, Itani LA. Mental health research in the Arab world: an update. BJPsych Int 2018 Jan 02;12(S1):S-25-S-28 [ https://www.cambridge.org/core/product/identifier/S2056474000000829/type/journal_article ] [ CrossRef ]
- Shahab M, Al-Tuwaijri F, Bilal L, Hyder S, Al-Habeeb AA, Al-Subaie A, et al. The Saudi National Mental Health Survey: methodological and logistical challenges from the pilot study. Int J Methods Psychiatr Res 2017 Sep;26(3):e1565 [ https://europepmc.org/abstract/MED/28497533 ] [ CrossRef ] [ Medline ]
- Kessler R, Ustün TB. The World Mental Health (WMH) survey initiative version of the World Health Organization (WHO) Composite International Diagnostic Interview (CIDI). Int J Methods Psychiatr Res 2004;13(2):93-121 [ https://europepmc.org/abstract/MED/15297906 ] [ CrossRef ] [ Medline ]
- Nishi D, Imamura K, Watanabe K, Ishikawa H, Tachimori H, Takeshima T. Psychological distress with and without a history of depression: results from the World Mental Health Japan 2nd Survey (WMHJ2). J Affect Disord Internet 2020 Mar 15;265:545-551 [ https://linkinghub.elsevier.com/retrieve/pii/S0165032719319822 ] [ CrossRef ]
- Rahimi-Movaghar A, Amin-Esmaeili M, Sharifi V, Hajebi A, Radgoodarzi R, Hefazi M, et al. Iranian mental health survey: design and field procedures. Iran J Psychiatry 2014 Apr;9(2):96-109 [ https://europepmc.org/abstract/MED/25632287 ] [ Medline ]
- Mudryk W, Burgess MJ, Xiao P. Quality control of CATI operations in Statistics Canada. Statistics Canada. 1996. URL: http://www.asasrms.org/Proceedings/papers/1996_020.pdf [accessed 2023-11-12]
- Subramaniam M, Vaingankar J, Heng D, Kwok KW, Lim YW, Yap M, et al. The Singapore Mental Health Study: an overview of the methodology. Int J Methods Psychiatr Res 2012 Jun 01;21(2):149-157 [ https://europepmc.org/abstract/MED/22331628 ] [ CrossRef ] [ Medline ]
- Szklo M, Nieto FJ. Epidemiology: Beyond the Basics. Sudbury, MA. Jones and Bartlett Publishers; 2014.
- Hyder S, Bilal L, Akkad L, Lin Y, Al-Habeeb A, Al-Subaie A, et al. Evidence-based guideline implementation of quality assurance and quality control procedures in the Saudi National Mental Health Survey. Int J Ment Health Syst 2017;11:60 [ https://ijmhs.biomedcentral.com/articles/10.1186/s13033-017-0164-0 ] [ CrossRef ] [ Medline ]
- Khaled S, Petcu C, Bader L, Amro I, Al-Assi M, Le Trung K, et al. Conducting a state-of-the-art mental health survey in a traditional setting: challenges and lessons from piloting the World Mental Health Survey in Qatar. Int J Methods Psychiatr Res 2021 Sep;30(3):e1885 [ https://europepmc.org/abstract/MED/34224172 ] [ CrossRef ] [ Medline ]
- Mortier P, Vilagut G, Ferrer M, Alayo I, Bruffaerts R, Cristóbal-Narváez P, et al. MINDCOVID Working group. Thirty-day suicidal thoughts and behaviours in the Spanish adult general population during the first wave of the Spain COVID-19 pandemic. Epidemiol Psychiatr Sci 2021 Feb 17;30:e19 [ https://europepmc.org/abstract/MED/34187614 ] [ CrossRef ] [ Medline ]
- Alonso J, Vilagut G, Mortier P, Ferrer M, Alayo I, Aragón-Peña A, et al. MINDCOVID Working group. Mental health impact of the first wave of COVID-19 pandemic on Spanish healthcare workers: a large cross-sectional survey. Rev Psiquiatr Salud Ment (Engl Ed) 2021 Apr;14(2):90-105 [ https://linkinghub.elsevier.com/retrieve/pii/S1888-9891(20)30128-2 ] [ CrossRef ] [ Medline ]
- Bantjes J, Kazdin A, Cuijpers P, Breet E, Dunn-Coetzee M, Davids C, et al. A web-based group cognitive behavioral therapy intervention for symptoms of anxiety and depression among university students: open-label, pragmatic trial. JMIR Ment Health 2021 May 27;8(5):e27400 [ https://mental.jmir.org/2021/5/e27400/ ] [ CrossRef ] [ Medline ]
- Berman AH, Bendtsen M, Molander O, Lindfors P, Lindner P, Granlund L, et al. Compliance with recommendations limiting COVID-19 contagion among university students in Sweden: associations with self-reported symptoms, mental health and academic self-efficacy. Scand J Public Health 2022 Feb;50(1):70-84 [ https://journals.sagepub.com/doi/abs/10.1177/14034948211027824?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub 0pubmed ] [ CrossRef ] [ Medline ]
- Bruffaerts R, Voorspoels W, Jansen L, Kessler RC, Mortier P, Vilagut G, et al. Suicidality among healthcare professionals during the first COVID19 wave. J Affect Disord 2021 Mar 15;283:66-70 [ https://europepmc.org/abstract/MED/33524660 ] [ CrossRef ] [ Medline ]
- Ward C, McLafferty M, McLaughlin J, McHugh R, McBride L, Brady J, et al. Suicidal behaviours and mental health disorders among students commencing college. Psychiatry Res 2022 Jan;307:114314 [ CrossRef ] [ Medline ]
- Lavrakas PJ. Encyclopedia of Survey Research Methods. Thousand Oaks, CA. Sage Publications; 2008.
- Costa M. Computer-assisted telephone interviewing (CATI) starter kit. Survey CTO Support Centre. 2020. URL: https://support.surveycto.com/hc/en-us/articles/360044958494-Computer-assisted-telephone-interviewing-CATI-starter-kit [accessed 2023-11-12]
- Cisco umbrella. Palo Alto Networks. URL: https://www.paloaltonetworks.com/products/globalprotect [accessed 2021-12-12]
- Vuk T. Quality indicators: a tool for quality monitoring and improvement: quality indicators. ISBT Sci Ser 2012;7(1):24-28 [ https://onlinelibrary.wiley.com/doi/10.1111/j.1751-2824.2012.01584.x ]
- Download SQL Server Management Studio (SSMS). Microsoft Ignite. 2023 Oct 26. URL: https://docs.microsoft.com/en-us/sql/ssms/ [accessed 2023-11-12]
- Groves RM. Survey Errors and Survey Costs. Hoboken, NJ. John Wiley & Sons; 1989.
- Biemer PP, Lyberg LE. ntroduction to Survey Quality. Hoboken, NJ. John Wiley & Sons; 2003.
Published on 27.11.2023 in Vol 25 (2023)
Co-design of a Mobile App for Engaging Breast Cancer Patients in Reporting Health Experiences: Qualitative Case Study
Authors of this article:
- Carla Taramasco 1, 2 , PhD ;
- Carla Rimassa 1, 3 , PhD ;
- René Noël 4 , MSc ;
- María Loreto Bravo Storm 5 , PhD ;
- César Sánchez 5 , MD
1 Instituto de Tecnologías para la Innovación en Salud y Bienestar, Facultad de Ingeniería, Universidad Andrés Bello, Viña del Mar, Chile
2 Centro para la Prevención y el Control del Cáncer, Santiago, Chile
3 Facultad de Medicina, Escuela de Fonoaudiología, Campus San Felipe, Universidad de Valparaíso, San Felipe, Chile
4 Escuela de Ingeniería Informática, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso, Chile
5 Departamento de Hematología y Oncología, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
Carla Taramasco, PhD
Instituto de Tecnologías para la Innovación en Salud y Bienestar, Facultad de Ingeniería
Universidad Andrés Bello
Viña del Mar, 2531015
Phone: 56 322507000
Email: [email protected]
Background: The World Health Organization recommends incorporating patient-reported experience measures and patient-reported outcome measures to assure the quality of care processes. New technologies, such as mobile apps, could help patients report and monitor adverse effects and doubts during treatment. However, engaging patients in the daily use of mobile apps is a challenge that must be addressed according to people's needs.
Objective: We present a qualitative case study documenting the process of identifying the information needs of breast cancer patients and health care professionals during the treatment process in a Chilean cancer institution. The study aims to identify patients’ information requirements for integration into a mobile app that accompanies patients throughout their treatment while also providing features for reporting adverse symptoms.
Methods: We conducted focus groups with breast cancer patients who were undergoing chemotherapy (n=3) or who completed chemotherapy between 3 months and 1 year (n=1). We also surveyed health care professionals (n=9) who were involved in patient care and who belonged to the oncology committee of the cancer center where the study took place. Content analysis was applied to the responses to categorize the information needs and the means to satisfy them. A user interface was designed according to the findings of the focus groups and was assessed by 3 trained information system and user interaction design experts from 2 countries, using heuristic evaluation guidelines for mobile apps.
Results: Patients’ information needs were classified into 4 areas: an overview of the disease, information on treatment and day-to-day affairs, assistance on the normality and abnormality of symptoms during treatment, and symptoms relevant to report. Health care professionals required patients to be provided with information on the administrative and financial process. We noted that the active involvement of the following 4 main actors is required to satisfy the information needs: patients, caregivers, social network moderators, and health professionals. Seven usability guidelines were extracted from the heuristic evaluation recommendations.
Conclusions: A mobile app that accompanies breast cancer patients and lets them report symptoms requires the involvement of multiple participants to handle the reports and day-to-day information needs. User interfaces must be designed with consideration of patients' social conventions and the emotional load of disease information.
Cancer is a disease characterized by the accelerated multiplication of abnormal cells that can spread to other organs (metastasis). It is the leading cause of death worldwide, with almost 10 million deaths in 2020; breast, lung, colorectal, prostate, dermal, and gastric cancers are the most common [ 1 ]. The situation in Latin America is worrying because 5-year survival after diagnosis is lower than in Organization for Economic Co-operation and Development (OECD) countries [ 2 ]. In Chile, as in other countries, morbidity and mortality from acute and chronic noncommunicable diseases (NCDs) have increased. At the national level, NCDs were the main cause of death between 2009 and 2019, with cancer ranked first and cardiovascular diseases second in 2019, placing Chile second in South America [ 3 ]. In addition, although the disease can affect people throughout the life cycle, data from the beginning of the second millennium show that the number of new cases in both sexes increases with advancing age [ 4 ].
In recent years, Chile has reached several milestones aimed at improving the detection, care, and monitoring of people with cancer, among them the National Cancer Plan of 2018 [ 4 ], the National Cancer Law (Law 21 258) [ 5 ], and the National Cancer Registry (NCR) [ 6 ]. The National Cancer Plan proposes 5 strategic areas, 3 of which are considered transversal and fundamental, including the strengthening of registration, information, and surveillance systems [ 7 ]. The Cancer Law makes notification of the disease mandatory [ 5 ], and the NCR provides health authorities with a national information system that continuously and systematically collects, stores, processes, and analyzes data on all cases and types of cancer in the country, covering public and private health patients and more than 20 establishments, with over 5000 cases of cancer recorded [ 2 ].
The NCR is a technological tool that helps monitor cancer trends over time, guides the planning and evaluation of cancer control programs, shows cancer patterns in different populations, and identifies high-risk groups, enabling decisions to be made with specific needs in mind; information from the NCR also contributes to prioritizing resource allocation and promoting research activities in specific areas [ 3 ]. However, the World Health Organization now recommends incorporating patient-reported experience measures (PREMs) and patient-reported outcome measures (PROMs) to guarantee the quality of health care processes [ 8 ]. This health entity notes that quality assurance and improvement are important components of the development and sustainability of services and must consider cultural characteristics. Outcome and experience measures reported by patients and health professionals thus provide valuable data on the person-centeredness and effectiveness of the services provided. They present information on a person's self-perception of their health, which may include quality of life, functioning, and self-efficacy, or reveal a person's perception of their experience of a health or social care service, which may include access, waiting times, and the possibility of participating in shared decision-making [ 8 ]. Therefore, it is necessary to incorporate information directly from patients into the NCR.
Mobile apps for collecting and managing adverse symptoms have been designed [ 9 , 10 ], and experimental evidence confirms their positive effects among breast cancer patients, leading to significantly less symptom prevalence and symptom burden [ 11 ]. However, engaging patients in the daily use of apps for reporting health experiences is challenging [ 12 ]. One strategy to introduce a tool for reporting adverse symptoms into the daily life of breast cancer patients and simultaneously influencing their quality of life is to leverage positive experiences with accompanying applications [ 13 , 14 ]. While there is evidence that functionalities, such as discussion and learning forums, are highly used by patients, adherence to use depends on various factors, requiring a tailored design [ 15 ].
In this study, we aimed to identify the information needs of patients and health care professionals in order to integrate them into a mobile app that accompanies patients throughout their treatment and, at the same time, offers functions for reporting adverse symptoms. The following subsections present evidence on the benefits of reporting adverse symptoms through PREM and PROM surveys, and existing technological tools to support this process.
Use of PREMs and PROMs in Cancer
The clinical follow-up of cancer patients is an interdisciplinary activity that aims to control side effects and detect possible relapse early, and it varies depending on the type of cancer and the characteristics of the person [ 4 ]. In Chile, follow-up is part of the treatment of cancer patients and is carried out through secondary and tertiary care [ 16 ] to monitor possible complications of the disease (metastasis, thrombosis, dysphagia, etc) and of treatment (myopathies, neuropathies, etc) [ 17 ]. However, such information is not collected in the NCR, which focuses on estimating the incidence and type of cancer, nor is it used in nationwide statistics. The NCR collects information from 4 population-based cancer registries in the country [ 18 ]. In this regard, the OECD's recommendations indicate that Chile should develop more systematic monitoring for cancer control by (1) extending the registry to more regions; (2) expanding the information collected from screening and diagnosis, as for childhood and cervical cancer, where data are linked to public and private sector providers; and (3) using PREMs and PROMs to improve the quality of cancer care and overall care [ 19 ].
PROM surveys are standardized and validated surveys that measure patient-reported results, for example during the perioperative period, to capture perceptions of health status, level of perceived deterioration, extent of disability, and health-related quality of life; they can be classified as generic or disease-specific [ 20 ]. For example, the QLQ-C30 questionnaire is generic for cancer, is available in more than 100 languages, including Spanish [ 21 ], and is the most widely used questionnaire [ 22 ]. The QLQ-PAN26 questionnaire [ 23 ], on the other hand, is specific to pancreatic cancer; the 2 questionnaires can be used together. The use of PROMs in clinical practice is associated with (1) a reduction in emergency care; (2) improved doctor-patient communication, quality of care, quality of life, and experience with providers; and (3) better survival, compared with usual care, in patients with metastases who are undergoing chemotherapy [ 24 ]. Other studies suggest that routine PROM collection may improve quality of life and outcomes for pelvic cancer patients [ 25 ], identify undetected symptoms [ 26 ], and aid in the clinical management of adverse effects and related interventions [ 27 ].
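As an illustration of how such instruments are scored, the published EORTC approach transforms raw QLQ-C30 item responses linearly onto a 0-100 scale. The sketch below follows those general rules; the item values shown are illustrative and do not reproduce an official scale definition.

```typescript
// Sketch of the EORTC QLQ-C30 linear score transformation (0-100 scale).
// The raw score is the mean of a scale's items; "range" is the spread of
// possible item values (3 for 4-point items, 6 for the 7-point global items).

function rawScore(items: number[]): number {
  return items.reduce((a, b) => a + b, 0) / items.length;
}

// Symptom scales and global health status: higher score = more of what is measured.
function symptomScore(items: number[], range = 3): number {
  return ((rawScore(items) - 1) / range) * 100;
}

// Functional scales are reversed: higher score = better functioning.
function functionalScore(items: number[], range = 3): number {
  return (1 - (rawScore(items) - 1) / range) * 100;
}
```

For instance, two symptom items both answered "a little" (2 on the 4-point scale) yield a symptom score of about 33, while functional items all answered "not at all" (1) yield the best possible functional score of 100.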
However, the quality of life of people with cancer is influenced not only by the complications of the disease but also by the consequences of treatment, which involve a high cost [ 28 ]; therefore, it is important to include the measurement of financial toxicity in PROMs. An example is the COST-FACIT survey [ 29 ], which has 12 questions divided by theme (affect, coping, family, financial, and resources) [ 30 ].
In general, the use of standardized surveys allows comparative studies to be carried out; however, it is also possible to create surveys adapted to the local reality. For this, it is recommended to follow the ISOQOL standard for PROM measures [ 31 , 32 ]: (1) Conceptual model: description and framework; (2) Reliability: degree to which the measurement of the patient-reported outcome (PRO) is free of measurement error; (3) Validity: degree to which the instrument measures the PRO concept it intends to measure; (4) Interpretability: ease of understanding the meaning of the score of a PRO measure; (5) Minimum important difference: the smallest score difference that patients or guardians perceive as important, beneficial, or harmful; and (6) Burden: time, effort, and other demands on those who complete the instrument or those who administer it (investigator or administrative staff).
The collection of PROMs through digital means is known as ePROM. Authors have pointed out [ 33 ] that ePROMs have greater acceptance and preference by patients, lower costs, similar or faster completion times, better data quality and response rates, and more appropriate patient management of symptoms. The disadvantages identified are related to privacy, the large initial financial investment, and the digital divide affecting some people.
PREM surveys collect information on patients' lived experiences during care and support analysis of the impact of the care process [ 20 ], such as waiting times and doctor-patient communication. They can be classified as (1) relational, regarding the relationship with those who provide care (an example questionnaire is CARE) [ 34 ], and (2) functional, with respect to practical matters such as the availability of care [ 20 ].
The National Cancer Program of the United Kingdom’s National Health Service developed a PREM survey to monitor progress in cancer care, drive quality improvements, support cancer care commissioners and providers, and inform the work of the various charities and stakeholders supporting cancer patients [ 35 ]. In Chile, the PROM QLQ-ELD 14 (Spanish version), which measures the quality of life of older adults with cancer, has been validated, and it was concluded that the instrument in the applied population presented psychometric properties suitable for survivors of breast, colorectal, gastric, hematologic, lung, gynecological, head and neck, prostate, skin, and other cancers and found that gynecologic cancer survivors have the worst mobility [ 36 ]. Other investigators [ 37 ] used the QLQ-C30 and QLQ-STO22 questionnaires for stomach cancer and concluded that a significant proportion of patients showed an improvement in global health and perception of pain, despite the worsening of some symptoms that could be related to therapy, indicating that research is required on a large scale to confirm the observation.
The advantages of data collection using PROMs and PREMs justify the need for their continuous collection to increase the impact of an NCR. However, challenges need to be addressed regarding the need to collect indicators at a time when patients are comfortable (ideally at home [ 20 ]), and to design user-centered tools that enable user engagement to overcome digital divides [ 33 ].
Technological Tools for the Collection of PROMs and PREMs
In the market, there are PROM and PREM collection tools that can be divided into generic and specific applications.
Form builder applications allow the creation and distribution of generic surveys that, given their functionality, can be used for PROMs and PREMs; examples are Teamscope [ 38 ], SurveyCTO [ 39 ], Beaver [ 40 ], KoBo Toolbox [ 41 ], REDCap [ 42 ], and ODK [ 43 ].
ePROM and ePREM applications are tools that have standardized surveys. Pro-CTCAE Patient Symptom Reporter has a web application for the registration of adverse events according to the CTCAE (Common Terminology Criteria for Adverse Events) [ 44 ]. Buddy Care Platform allows automatic sending of surveys and reminders, and incorporates instructions and educational material for patients [ 45 ]; however, it is only available for Germany, Finland, and the United States. Patient IQ [ 46 ] captures PROMs, identifies predictors of clinical outcomes, and improves the patient experience; however, it is available only in the United States. My Clinical Outcomes is a web application that allows regular collection of information on diagnosis, treatment, and any clinical condition [ 47 ]. Philips Quest Link integrates with external medical records and uses validated questionnaires to collect, process, and align PROMs and clinical information [ 48 ]. Heartbeat is a German web application that allows the collection and analysis of PROMs, and it can be integrated with external clinical records through the FHIR (Fast Healthcare Interoperability Resources) clinical message exchange standard [ 49 ]. Zedoc PROM, which is a platform for the integral management of PROMs, has integration with external systems through FHIR and has the support of Logical Observation Identifiers Names and Codes (LOINC) and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT), and it is available for New Zealand, Singapore, and Australia [ 50 ]. Force Therapeutics has integration with the American Joint Replacement Registry and other records, reserving the right to use the information registered on the platform [ 51 ].
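As a minimal illustration of the FHIR-based exchange that platforms such as Heartbeat and Zedoc rely on, a PROM answer can travel as a FHIR R4 QuestionnaireResponse resource. The field names below follow the FHIR specification, but the linkId, question text, and value are invented for illustration; a real deployment would reference a published Questionnaire and a specific profile.

```typescript
// Minimal sketch of a FHIR R4 QuestionnaireResponse carrying one PROM answer.
// Only a small subset of the resource's fields is modeled here.

interface QRAnswer { valueInteger?: number; valueString?: string; }
interface QRItem { linkId: string; text?: string; answer?: QRAnswer[]; }
interface QuestionnaireResponse {
  resourceType: "QuestionnaireResponse";
  status: "in-progress" | "completed" | "amended";
  item: QRItem[];
}

// Hypothetical report: one 4-point fatigue item answered with "quite a bit" (3).
const promReport: QuestionnaireResponse = {
  resourceType: "QuestionnaireResponse",
  status: "completed",
  item: [
    { linkId: "fatigue", text: "Were you tired?", answer: [{ valueInteger: 3 }] },
  ],
};
```

Serializing such an object as JSON is what allows the integrations with external clinical records described above, since both sides agree on the resource structure rather than on a proprietary format.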
Form builder applications are useful for developing surveys; however, they are designed for projects that do not require integration with other systems. Specific applications, on the other hand, provide more functionality than necessary, charge for their use, and are not available in Chile, and some currently reserve the right to use the data collected.
From a technical point of view, advances exist that allow the development of technological tools for collecting PROMs and PREMs; from a patient-centered approach, however, it is essential that this development contemplates the needs of the primary users, that is, patients and cancer professionals.
This work presents the results of a qualitative case study that identified the needs of cancer patients during the breast cancer treatment process in order to design a mobile app that allows the reporting of adverse symptoms and impacts and improves quality of life. In particular, the research questions addressed in this study are as follows:
- What information does a patient need during the breast cancer treatment process?
- What information does a health care professional need from a patient during the breast cancer treatment process?
- What are the user roles that must interact with the application, so information needs are met?
To address these questions, we involved patients and health care professionals in a co-design process.
The research protocol was reviewed and approved by the Scientific Ethics Committee CEC Med-UC (3/2019).
Design and Sample
A qualitative case study was conducted to address the research questions. Participants were informed about the study and voluntarily signed the consent document. The study was carried out between June and July 2019. Data were collected through focus groups and a survey. The focus group methodology was used to survey the needs of 2 types of users: breast cancer patients and health professionals. We report the qualitative study according to the guidelines in [ 52 ].
The focus groups were conducted by the first author (CT), who was the principal investigator of the research project and has more than 15 years of experience in eHealth research. There was no relationship with the participants prior to the study. Participants had knowledge about the goals of the research and acknowledged the researcher and the research team as participants of the NCR project. The methodological orientation of the research was content analysis, as we aimed to identify common information need themes across the participants.
The participants of the focus groups (both patients and health care professionals) were selected by convenience sampling and were patients or professionals of the Chilean health institution. In all focus groups, the participants were approached face-to-face. Two focus groups were conducted with cancer patients (n=4). One participant dropped out without giving further reasons. All participants were women aged 34 to 53 years. The group of health professionals included those who participated in the care of patients and belonged to the cancer committee of the cancer center where the study was conducted (n=9). No health care professionals dropped out of the focus groups, but only 4 surveys were collected after the activity.
Data were collected in the clinic, and only the participants and researchers were present during the activity. The cancer patients were divided into the following 2 groups: stage I, II, or III patients who were undergoing chemotherapy (n=3) and stage I, II, or III patients who had finished chemotherapy between 3 months and 1 year (n=1).
For the patient focus groups, the following 4 guiding questions were used: (1) What information about your disease would you like to have? (2) What information regarding your treatment would you like to have? (3) In what instances would you like to have feedback from a health professional during your treatment? (4) What symptoms have you reported to your doctor that have been caused by an adverse effect? Field notes were taken by the third author (RN). The first focus group with patients lasted 55 minutes, and the second one lasted 33 minutes. In the 2 cases, data saturation marked the end of the interviews.
For the health care professional focus group, the guiding question was as follows: What are the information needs of cancer patients during treatment? At the end of the group session, a written survey form was delivered, which was to be sent later to the research team and which inquired about the information that professionals required from patients and how important and feasible it was to provide the information requested by patients through a digital tool.
Two researchers analyzed the collected data. The data were analyzed following the content analysis method with an inductive approach [ 53 ]. One of the researchers coded the transcripts of the focus groups to obtain the subcategories. The abstraction process, that is, the grouping of the categories around less numerous and higher-level categories, was carried out jointly by the 2 researchers. The main categories emerged from grouping the generic categories into functional groups, which were then considered in the design process as modules of the system. In Figure 1 , we provide an example of the content analysis process.
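The two-step abstraction described above, from coded fragments to generic categories and then to the main categories that became the app's modules, can be sketched as a pair of lookup tables. The example codes are hypothetical; only the 4 module names come from the study's findings.

```typescript
// Sketch of the content analysis abstraction step. Coded fragments map to
// generic categories, which are then grouped into the main categories that
// were carried into the design as system modules. Codes are invented examples.

const codebook: Record<string, string> = {
  "what does staging mean": "disease knowledge",
  "is this symptom normal": "symptom feedback",
  "insurance paperwork": "administrative process",
  "where to buy turbans": "complementary information",
};

const mainCategories: Record<string, string> = {
  "disease knowledge": "Clinical information",
  "symptom feedback": "Report and assistance",
  "administrative process": "Administrative guide",
  "complementary information": "Community",
};

function moduleFor(codedFragment: string): string | undefined {
  const generic = codebook[codedFragment];
  return generic ? mainCategories[generic] : undefined;
}
```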
The results are presented as answers for each of the research questions.
This group corresponds to patients with stage I, II, or III breast cancer who are being treated with chemotherapy or who have completed it. The information obtained from the guiding questions is summarized in Table 1 .
The group corresponds to health professionals who care for the cancer patients participating in the study. One aspect that stood out in the focus group with professionals was that they considered it necessary to provide patients with guidance on the administrative and financial process in which they are immersed. They also mentioned the need to have the clinical information of other professionals whom patients consulted during the investigation, diagnosis, and treatment of the disease. In addition, all professionals who responded to the survey pointed out that it was important to provide patients with generic information on the symptoms and intensities that should be reported with different levels of urgency, and to provide information on daily activities for well-being (diet and physical exercise). Most professionals pointed out that it was important to respond to patients' private clinical questions (chat or private message) and public questions (forum or social network) through a digital means of communication, and to deliver nonclinical information for the well-being of cancer patients (activities, information on turbans and bras, etc). On a scale of 1 to 5 (where 1 indicates strong disagreement and 5 indicates strong agreement), the importance of providing information on therapies complementary to clinical treatment was scored 5 by one professional, and the other 3 professionals scored it 3. In their responses, the majority of respondents indicated (scores 4 and 5) that the actions qualified as important were feasible, except for answering private clinical questions, which all professionals scored 2 or 3. On the other hand, among 12 symptoms presented to determine the adverse effects of a treatment, all professionals agreed on one, pain, as an important symptom to report. Likewise, all professionals who responded to the survey agreed that for adverse effects, it is necessary to know the temporality, severity, and intensity.
Proposal for the Design of the App
From the collection of information with patients and cancer professionals, 4 areas of information needs were detected: (1) knowledge regarding the disease in general, (2) feedback for the reporting of symptoms, (3) support in administrative processes, and (4) complementary information. The proposed solution is a mobile app called +Contigo (Spanish for "+With you"), whose functionality is described below in terms of the 4 modules that compose it and the actors involved. On this basis, the prototype design for the main user interfaces and the design of the system architecture, including its components and deployment, are shown.
The solution in +Contigo proposes 4 modules with different modalities of use, oriented to different actors or types of users: (1) Clinical information, (2) Report and assistance, (3) Administrative guide, and (4) Community. It is considered that a large part of the actions will be carried out during the implementation of the solution (pilot period and soft launch). Figure 2 shows the diagram of the +Contigo app.
Actors can access the 4 functional modules of +Contigo . All users are required to register in the system, and caregivers must register the patients under their care. Although the system will know the user's identity, it will be protected in any interaction with other actors in the registry, unless the user explicitly authorizes its use.
The moderator actor will be involved only with the community module and complementary information, and will be able to contribute information to the discussions of the patients' and caregivers' community, as well as edit or delete published information that may be considered harmful to users from a clinical or quality-of-life point of view.
The health professional actor will be directly related to the report and assistance module and will receive notifications from patients, classified by priority and severity, and will be able to deliver online responses (audio, text, images, videos, or reference links) or take other actions outside the system (telephone contact, scheduling a regular control visit, or scheduling an urgent visit).
The administrator has a central role in the +Contigo management process: configuring the system and providing multimedia content; defining the different levels in the clinical information module; configuring the types of symptoms, the degrees of severity with their descriptions, and the alert priorities in the report and assistance module; uploading multimedia content, adding questions, providing answers, and adding checklist items in the administrative guide module; and generating discussion topics in the community module.
On downloading the app on a mobile device, users will be able to recognize +Contigo ( Figure 3 ), with the logo on the screen of the mobile device.
When entering the app, the user will see the login screen ( Figure 4 ) to start a secure session, which is protected by a password and a chosen username.
Figures 5 - 8 present the domain model of each module. This conceptual model represents the vocabulary of the system, as well as the relationships allowed between these concepts, with dependence (filled rhombus) or without dependence (hollow rhombus) and with one-to-many (1..*) or zero-to-many (0..*) multiplicities. Text boxes with appended notes (dotted lines) complement the information and clarify how the model will respond to certain functional requirements.
The clinical care process assistant module ( Figure 5 ) provides information regarding the level of care the patient is in, using questions and answers that allow the patient to internalize the disease, the phases of the process, the treatments, and, in particular, chemotherapy.
The report and assistance module ( Figure 6 ) allows the user to consult detailed information on chemotherapy symptoms, indicating levels within expected ranges and those that are out of normality, and allowing symptoms to be reported based on a scale of type and severity. This module enables health care professionals to interact with patients via text, audio, or video messages, or initiate a phone call to the patient’s mobile.
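A minimal sketch of how such a report might be triaged into the priorities that health care professionals receive, using the five-level severity scale and the professionals' consensus that pain is always important to report. The threshold values and the urgent-symptom set are hypothetical; in the proposed design, this configuration belongs to the administrator role.

```typescript
// Hypothetical triage of a symptom report into notification priorities.
// Thresholds and the always-urgent symptom set are illustrative assumptions.

type Severity = 1 | 2 | 3 | 4 | 5; // the five-face severity scale

interface SymptomReport {
  symptom: string;
  severity: Severity;
  voiceNoteUrl?: string; // optional complementary audio message
}

// Pain was the one symptom all surveyed professionals agreed must be reported.
const alwaysUrgent = new Set(["pain"]);

function priority(r: SymptomReport): "urgent" | "routine" {
  if (alwaysUrgent.has(r.symptom) && r.severity >= 4) return "urgent";
  return r.severity === 5 ? "urgent" : "routine";
}
```

A classification like this would let the module sort incoming notifications before a professional decides between an in-app response and an out-of-system action such as a phone call.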
The administrative guide module ( Figure 7 ) is an informative module with a focus of questions and answers, together with a checklist to guide the patient regarding the eventual and possible main procedures to be carried out in the process of the disease, which is associated with the different stages and substages of the care process.
The community module ( Figure 8 ) provides practical and everyday information for nonclinical aspects. This module is aimed at supporting the quality of life (eg, where to find clothing, support groups, and information and dissemination activities); posing as a social network, where users can use a fictitious or real name, with the objective that the community shares the information it considers necessary and relevant; and providing links to external resources, comments, photographs, and audio or video messages.
A prototype design for the main user interface and the design of the +Contigo architecture are presented, including its components and deployment, and finally the data model is proposed.
With the main view of the first level of the clinical information module ( Figure 9 ), the user can access information on the whole process: patient diagnosis, multidisciplinary case analysis, cancer staging, treatment, and case tracking. From the bottom menu, the user can quickly access this information and the other modules, such as report, administrative guide, and community.
The sublevel of the clinical information module ( Figure 10 ) displays multimedia information (text, audio, and video) of the stage of the clinical care process selected in the main view (in this case, Treatment), where the active section of the interface in which the user is located is identified through a color change.
The report and assistance module ( Figure 11 ) is accessible directly from “Report” in the bottom menu. Here, the patient can choose what aspects they want to report (health outcomes, care experience, symptoms, or financial burden).
The report of symptoms related to the disease or treatment ( Figure 12 ) is made by indicating severity through a scale represented in a set of 5 selectable faces (radio button) that can be accompanied by complementary information through a voice message by pressing the microphone icon, which activates this function on the phone.
Answers to frequently asked questions in the administrative guide module ( Figure 13 ) are available directly from “Assistant” in the bottom menu. Here, administrative information is provided using a question, answer, and checklist approach to guide the patient regarding the main procedures to be carried out. When a question is selected from the list, the answer is displayed; when it is selected again, it collapses, hiding the answer.
The recording of the patient’s voice notes in the administrative guide module ( Figure 14 ) is accessible directly from “Assistant” in the bottom menu, in this case showing the answer to a selected question, which corresponds to a list of steps among which the patient can mark those that were fulfilled or not. In addition, an audio note can be attached to each response by pressing the microphone icon.
Figure 15 exemplifies some possible topics of conversation in the community module, which is accessible directly from the bottom menu. The interface groups several categories, each with different conversation topics that can vary, renew, and expire depending on the interests of the participants. The functionality of this section is similar to a chat, but a moderator (a health professional) is in charge of reviewing the material before publication to ensure that the suggestions and recommendations discussed in the community do not affect the quality of life of the participating patients.
A 3-layer architecture is proposed for the development of the app, based on microservices, that is, small self-contained apps (each with its own server) whose scope is limited to a subset of the data model (consistent with the concepts of the domain model). To ensure portability, the development of a progressive web app compatible with different mobile operating systems (Android and iOS) is proposed, implemented in Angular 8. For the data tier, it is proposed to use the same server and database management system that the organization already has. The main architectural definitions are specified in Table 2 .
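In a microservice decomposition of this kind, each module’s requests are handled by a separate self-contained service. As an illustration only (the paper does not name the services or their routes), a front-end helper might map a request path to the responsible service as follows; all service names and route prefixes are hypothetical.

```typescript
// Hypothetical mapping of app modules to back-end microservices.
// Service names and route prefixes are illustrative assumptions.
const serviceByPrefix: Record<string, string> = {
  "/clinical": "clinical-info-service", // clinical information module
  "/reports": "reporting-service",      // symptom/experience reporting module
  "/assistant": "assistant-service",    // administrative guide module
  "/community": "community-service",    // community (social network) module
};

// Resolve the microservice responsible for a given request path.
function routeToService(path: string): string | undefined {
  const prefix = Object.keys(serviceByPrefix).find((p) => path.startsWith(p));
  return prefix ? serviceByPrefix[prefix] : undefined;
}
```

Keeping each route prefix aligned with one module mirrors the stated design goal of limiting each service’s scope to a subset of the data model.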
Table 3 shows the quality attributes that the app must have.
Requirements for Reporting Health Experiences
While various applications for reporting health experiences and outcomes are available, some are general [ 38 - 43 ] and can be adapted for PREM and PROM purposes but do not allow integration, while others are specific [ 44 - 51 ] but provide more functionality than necessary, require payment, are not available in many countries, and in some cases reserve the right to use the collected data. In this sense, our study identified 4 categories of requirements, generating a mobile app design based on the responses of primary users (patients and cancer professionals). Although the sample corresponded to patients with breast cancer, we estimate that +Contigo would be useful for patients with other types of cancer.
It is currently recognized that cancer is not only a public health problem but also a sociohealth, social, and economic problem that affects the patient, family, and community [ 4 ]. Therefore, it is necessary to know the demands and needs of patients and to carry out all actions that tend to reduce waiting times and lack of knowledge about this disease, given that many patients have a delayed diagnosis (23.57%), largely attributable to health system inefficiencies (79.03%) [ 54 ]. New technologies can be a means to bridge inequity gaps [ 55 ] and to access the information that patients and clinicians need, while safeguarding the methods, standards, processes, and tools that have been reported in the literature to assess the quality of health information systems [ 56 , 57 ].
Threats to Validity
According to Wohlin et al [ 58 ], we analyzed the threats to the study’s internal, construct, external, and conclusion validity. Internal validity refers to the existence of other elements affecting the observed results. As in any case study, the specific context (in this case, the cancer center) might affect the results. Nevertheless, our approach is not explanatory but exploratory, so the results are meant to be interpreted in the context of the case study provided in the Introduction. Construct validity deals with the degree to which the measurements reflect what the researchers have in mind and help answer the research questions. We addressed this threat by designing the questions for patients and health professionals beforehand and explaining them to the subjects during the focus groups. The survey questions were designed to answer research questions 1 and 2, while the third research question emerged from the requirements analysis. External validity addresses to what extent it is possible to generalize the findings. The sample size and the scope of the study certainly limit the generalizability of the results. However, as in other case studies, we intend to provide enough detail for the audience to extend the results to cases with common characteristics and for which the findings are relevant [ 59 ]. Finally, conclusion validity in case studies mainly deals with the reliability of the measures. We addressed this threat by providing details of the content analysis and the categorization of the findings; however, different researchers might come up with different categories by following the same analysis procedure.
We evaluated the app’s design through a heuristic evaluation [ 60 ], where trained evaluators identified potential usability problems by reviewing the app or prototypes against a set of design best practices called heuristics. Three information system and user interaction design experts from Chile and Spain performed the assessment. The evaluators reviewed the front end of the patient and caregiver mobile app and mockups of the apps for the health professional and social network moderator. The evaluators received a high-level description of the app features (described in the Modules section of this paper) and the app prototypes. The evaluators were asked to base their review on the heuristics for mobile apps by Bertini et al [ 61 ]. The reported problems indicated the function and role of the user, the heuristics involved, and their severity, which was classified as low, medium, or high. Evaluators were asked to provide recommendations for the improvement of each problem.
A total of 62 usability problems were identified by the experts, 18 of which had high severity. Among the most severe issues, the most compromised heuristics were “Ease of input, screen readability, and glanceability” (7 issues), “Esthetic, privacy, and social conventions” (6 issues), and “Consistency and mapping” (5 issues). The features with the most severe issues were patients’ symptom and experience reporting. Table 4 presents the features with high-severity problems.
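Tallies like those above can be produced mechanically by grouping the reported problems by heuristic and counting those graded high. A sketch of such an aggregation follows, assuming a simplified problem record; the evaluators’ actual report format is not specified in the paper.

```typescript
// Simplified (assumed) record of a problem reported by an evaluator.
type SeverityLevel = "low" | "medium" | "high";

interface UsabilityProblem {
  feature: string;    // app feature where the problem was found
  heuristic: string;  // compromised heuristic (Bertini et al)
  severity: SeverityLevel;
}

// Count high-severity problems per heuristic, as summarized in the text.
function highSeverityByHeuristic(problems: UsabilityProblem[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const p of problems) {
    if (p.severity !== "high") continue;
    counts.set(p.heuristic, (counts.get(p.heuristic) ?? 0) + 1);
  }
  return counts;
}
```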
Some of the most severe issues were related to the patient home screen and the disease information feature, which was named “My Journey” in the app. The “journey” name and metaphor were directly elicited from the focus groups, since patients wanted to avoid seeing the words “cancer” or “disease” in a daily-use app. Reviewer 1 from Spain mentioned that the “My Journey” feature was not intuitive for finding information about the disease and treatment. We think this is due to idiosyncratic differences between Chile and Spain, and it is consistent with the heuristic “Esthetic, privacy, and social conventions” in [ 61 ], which enforces taking into account the social and emotional aspects of the system. Regarding the results and experience reporting features, both Reviewer 2 and Reviewer 3 reported issues related to the heuristic “Ease of input, screen readability, and glanceability.” Given the length of the questionnaires, the reviewers recommended separating them into subsections, which would provide visibility of the questionnaires’ completion status, and autosaving the responses to avoid information loss. Regarding assistance for patient queries on the normality of their symptoms, Reviewer 1 and Reviewer 3 reported that the system should clearly separate interaction with an automatic assistant from interaction with a health professional.
After the evaluation process, the researchers analyzed the problems graded with high severity. The evaluators’ recommendations for these problems were studied and grouped by the compromised heuristics and the associated features. From the information requirements identified in the case study and the heuristic evaluation, we present a set of guidelines for designing a mobile app for self-reporting of outcomes and experiences among breast cancer patients ( Table 5 ). The guidelines consider the information requirements that the app must address and the usability considerations.
This study has allowed the characterization of the needs of the primary actors involved (patients and cancer health care professionals). The findings show that the main information needs are related to generic information on symptoms and their intensity, which should be reported according to different levels of urgency; guidance on the administrative process; nonclinical data that contribute to the well-being and daily comfort of patients (activities, information on turbans and bras, etc); and daily activities for well-being (diet and physical exercise). Meeting these needs must be supported by different mobile app features, including multimedia information on the disease and treatments; interactive forms and query mechanisms to report symptoms, experiences, and requests for assistance; and a social network enabling community support for day-to-day affairs. The information needs require the collaboration of the community of patients, caregivers, health care professionals, and social network moderators. The heuristic evaluation of the user interface reveals that the app must deliver disease and treatment information taking into account the emotional effect on patients as well as their social conventions, and must address the length of some questionnaires (particularly symptoms and experiences) with usability best practices.
Future research will focus on empirically assessing the effects of the study on patients’ symptoms and experience reporting. We aim to evaluate how using the app’s features correlates with patients’ reporting behaviors. Results from the heuristic evaluation suggest that more research is needed to underpin the cultural nuances of providing information about the disease considering patients’ social conventions.
This work was funded by the ANID FONDAP (152220002), and the Center for the Prevention and Control of Cancer (CECAN).
Conflicts of Interest
- Cancer: Facts and figures. World Health Organization. URL: https://www.who.int/news-room/fact-sheets/detail/cancer [accessed 2022-09-07]
- Registro Nacional de Cáncer: en qué consiste la nueva plataforma para registrar a pacientes con la enfermedad a nivel nacional (National Cancer Registry: what is the new platform to register patients with the disease nationwide). El Mostrador. URL: https://www.elmostrador.cl/agenda-pais/2020/11/18/registro-nacional-de-cancer-en-que-consiste-la-nueva-plataforma-para-registrar-a-pacientes-con-la-enfermedad-a-nivel-nacional/ [accessed 2022-09-07]
- Martínez-Sanguinetti MA, Leiva-Ordoñez AM, Petermann-Rocha F, Celis-Morales C. ¿Cómo ha cambiado el perfil epidemiológico en Chile en los últimos 10 años? (How has the epidemiological profile in Chile changed in the last 10 years?). Rev Méd Chile 2021 Jan;149(1):149-152 [ CrossRef ]
- Plan Nacional de Cáncer 2018-2028 (National Cancer Plan 2018-2028). Ministerio de Salud. URL: https://www.minsal.cl/wp-content/uploads/2019/01/2019.01.23_PLAN-NACIONAL-DE-CANCER_web.pdf [accessed 2022-03-03]
- National Cancer Act. Library of the National Congress. Law 21.258. Biblioteca del Congreso Nacional de Chile. URL: https://www.bcn.cl/leychile/navegar?idNorma=1149004 [accessed 2022-03-03]
- Projects. National Cancer Registry. Carla Taramasco. URL: https://carlataramasco.cl/ [accessed 2022-06-20]
- Villalobos Dintrans P, Hasen F, Izquierdo C, Santander S. [New challenges for health planning: Chile's National Cancer Plan]. Rev Panam Salud Publica 2020;44:e6 [ https://iris.paho.org/handle/10665.2/51803 ] [ CrossRef ] [ Medline ]
- Marco de aplicación: Orientación para los sistemas y servicios (Application framework: guidance for systems and services). World Health Organization. URL: https://iris.who.int/bitstream/handle/10665/337374/%209789240014718-spa.pdf?isAllowed=y&sequence=1 [accessed 2022-09-07]
- Howell D, Molloy S, Wilkinson K, Green E, Orchard K, Wang K, et al. Patient-reported outcomes in routine cancer clinical practice: a scoping review of use, impact on health outcomes, and implementation factors. Ann Oncol 2015 Sep;26(9):1846-1858 [ https://linkinghub.elsevier.com/retrieve/pii/S0923-7534(19)31752-1 ] [ CrossRef ] [ Medline ]
- Schougaard L, Larsen L, Jessen A, Sidenius P, Dorflinger L, de Thurah A, et al. AmbuFlex: tele-patient-reported outcomes (telePRO) as the basis for follow-up in chronic and malignant diseases. Qual Life Res 2016 Mar;25(3):525-534 [ https://europepmc.org/abstract/MED/26790427 ] [ CrossRef ] [ Medline ]
- Fjell M, Langius-Eklöf A, Nilsson M, Wengström Y, Sundberg K. Reduced symptom burden with the support of an interactive app during neoadjuvant chemotherapy for breast cancer - A randomized controlled trial. Breast 2020 Jun;51:85-93 [ https://linkinghub.elsevier.com/retrieve/pii/S0960-9776(20)30083-7 ] [ CrossRef ] [ Medline ]
- Lavallee D, Chenok K, Love R, Petersen C, Holve E, Segal C, et al. Incorporating Patient-Reported Outcomes Into Health Care To Engage Patients And Enhance Care. Health Aff (Millwood) 2016 Apr;35(4):575-582 [ https://doi.org/10.1377/hlthaff.2015.1362 ] [ CrossRef ] [ Medline ]
- Jongerius C, Russo S, Mazzocco K, Pravettoni G. Research-Tested Mobile Apps for Breast Cancer Care: Systematic Review. JMIR Mhealth Uhealth 2019 Feb 11;7(2):e10930 [ https://air.unimi.it/handle/2434/653758 ] [ CrossRef ] [ Medline ]
- Akingbade O, Nguyen K, Chow K. Effect of mHealth interventions on psychological issues experienced by women undergoing chemotherapy for breast cancer: A systematic review and meta-analysis. J Clin Nurs 2023 Jul;32(13-14):3058-3073 [ https://doi.org/10.1111/jocn.16533 ] [ CrossRef ] [ Medline ]
- Zhu H, Chen X, Yang J, Wu Q, Zhu J, Chan SW. Mobile Breast Cancer e-Support Program for Chinese Women With Breast Cancer Undergoing Chemotherapy (Part 3): Secondary Data Analysis. JMIR Mhealth Uhealth 2020 Sep 16;8(9):e18896 [ https://mhealth.jmir.org/2020/9/e18896/ ] [ CrossRef ] [ Medline ]
- Criterios Técnicos para Programación Modelo de Atención Oncológica (Technical Criteria for Programming Oncological Care Model). Ministerio de Salud. URL: https://www.minsal.cl/wp-content/uploads/2020/09/ANEXO-21.pdf [accessed 2022-10-22]
- Orientación técnica para el manejo integral de la persona con cáncer y su familia (Technical guidance for the comprehensive management of the person with cancer and their family). Ministerio de Salud. URL: https://redcronicas.minsal.cl/wp-content/uploads/2018/09/2018.09.10_OT-MANEJO-PERSONA-CON-CANCER-Y-SU-FAMILIA.pdf [accessed 2023-01-10]
- Segundo informe nacional de vigilancia de cáncer en Chile: Estimación de incidencia (Second national report on cancer surveillance in Chile: Incidence estimation). Ministerio de Salud. URL: http://epi.minsal.cl/wp-content/uploads/2020/08/VF_Informe_RPC_Estimacion_Incidencia.pdf [accessed 2022-07-01]
- Estudios de la OCDE sobre Salud Pública, Chile. HACIA UN FUTURO MÁS SANO: EVALUACIÓN Y RECOMENDACIONES (OECD Studies on Public Health, Chile. Towards a healthier future: Evaluation and recommendations). OECD. URL: https://www.oecd.org/health/health-systems/Revisi%C3%B3n-OCDE-de-Salud-P%C3%BAblica-Chile-Evaluaci%C3%B3n-y-recomendaciones.pdf [accessed 2022-07-01]
- Kingsley C, Patel S. Patient-reported outcome measures and patient-reported experience measures. BJA Education 2017 Apr;17(4):137-144 [ https://doi.org/10.1093/bjaed/mkw060 ] [ CrossRef ]
- EORTC QLQ-C30. European Organization for Research and Treatment of Cancer. URL: https://www.eortc.org/app/uploads/sites/2/2018/08/Specimen-QLQ-C30-English.pdf [accessed 2022-09-01]
- DiRisio A, Harary M, van Westrhenen A, Nassr E, Ermakova A, Smith T, et al. Quality of reporting and assessment of patient-reported health-related quality of life in patients with brain metastases: a systematic review. Neurooncol Pract 2018 Nov;5(4):214-222 [ https://europepmc.org/abstract/MED/31386015 ] [ CrossRef ] [ Medline ]
- EORTC QLQ - PAN26. European Organisation for Research and Treatment of Cancer. URL: https://www.eortc.org/app/uploads/sites/2/2018/08/Specimen-PAN26-English.pdf [accessed 2022-07-01]
- Maharaj A, Samoborec S, Evans SM, Zalcberg J, Neale R, Goldstein D, et al. Patient-reported outcome measures (PROMs) in pancreatic cancer: a systematic review. HPB (Oxford) 2020 Feb;22(2):187-203 [ https://linkinghub.elsevier.com/retrieve/pii/S1365-182X(19)30711-7 ] [ CrossRef ] [ Medline ]
- Moss C, Aggarwal A, Qureshi A, Taylor B, Guerrero-Urbano T, Van Hemelrijck M. An assessment of the use of patient reported outcome measurements (PROMs) in cancers of the pelvic abdominal cavity: identifying oncologic benefit and an evidence-practice gap in routine clinical practice. Health Qual Life Outcomes 2021 Jan 15;19(1):20 [ https://hqlo.biomedcentral.com/articles/10.1186/s12955-020-01648-x ] [ CrossRef ] [ Medline ]
- Kargo A, Coulter A, Jensen P, Steffensen K. Proactive use of PROMs in ovarian cancer survivors: a systematic review. J Ovarian Res 2019 Jul 15;12(1):63 [ https://ovarianresearch.biomedcentral.com/articles/10.1186/s13048-019-0538-9 ] [ CrossRef ] [ Medline ]
- Meryk A, Kropshofer G, Hetzer B, Riedl D, Lehmann J, Rumpold G, et al. Use of Daily Patient-Reported Outcome Measurements in Pediatric Cancer Care. JAMA Netw Open 2022 Jul 01;5(7):e2223701 [ https://europepmc.org/abstract/MED/35881395 ] [ CrossRef ] [ Medline ]
- Lentz R, Benson A, Kircher S. Financial toxicity in cancer care: Prevalence, causes, consequences, and reduction strategies. J Surg Oncol 2019 Jul;120(1):85-92 [ https://doi.org/10.1002/jso.25374 ] [ CrossRef ] [ Medline ]
- Witte J, Mehlis K, Surmann B, Lingnau R, Damm O, Greiner W, et al. Methods for measuring financial toxicity after cancer diagnosis and treatment: a systematic review and its implications. Ann Oncol 2019 Jul 01;30(7):1061-1070 [ https://linkinghub.elsevier.com/retrieve/pii/S0923-7534(19)31244-X ] [ CrossRef ] [ Medline ]
- COST: A FACIT Measure of Financial Toxicity. FACIT. URL: https://www.facit.org/measures/FACIT-COST [accessed 2022-09-07]
- Reeve B, Wyrwich K, Wu A, Velikova G, Terwee C, Snyder C, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res 2013 Oct;22(8):1889-1905 [ https://doi.org/10.1007/s11136-012-0344-y ] [ CrossRef ] [ Medline ]
- de Souza J, Yap B, Hlubocky F, Wroblewski K, Ratain M, Cella D, et al. The development of a financial toxicity patient-reported outcome in cancer: The COST measure. Cancer 2014 Oct 15;120(20):3245-3253 [ https://onlinelibrary.wiley.com/doi/10.1002/cncr.28814 ] [ CrossRef ] [ Medline ]
- Meirte J, Hellemans N, Anthonissen M, Denteneer L, Maertens K, Moortgat P, et al. Benefits and Disadvantages of Electronic Patient-reported Outcome Measures: Systematic Review. JMIR Perioper Med 2020 Apr 03;3(1):e15588 [ https://periop.jmir.org/2020/1/e15588/ ] [ CrossRef ] [ Medline ]
- CARE Measure. URL: https://caremeasure.stir.ac.uk/ [accessed 2022-07-01]
- Pevernagie D, Bauters F, Hertegonne K. The Role of Patient-Reported Outcomes in Sleep Measurements. Sleep Med Clin 2021 Dec;16(4):595-606 [ https://linkinghub.elsevier.com/retrieve/pii/S1556-407X(21)00053-9 ] [ CrossRef ] [ Medline ]
- Lorca L, Sacomori C, Fasce Pineda G, Vidal Labra R, Cavieres Faundes N, Plasser Troncoso J, et al. Validation of the EORTC QLQ-ELD 14 questionnaire to assess the health-related quality of life of older cancer survivors in Chile. J Geriatr Oncol 2021 Jun;12(5):844-847 [ https://doi.org/10.1016/j.jgo.2020.12.014 ] [ CrossRef ] [ Medline ]
- Betancour P, Mueller B, Sola J, Arancibia J, Ascui R, Araya I, et al. Quality of life and preoperative chemotherapy in gastric cancer in Chile: results from the observational study of perioperative chemotherapy in gastric cancer (PRECISO). Annals of Oncology 2019 Jul;30:iv61 [ https://doi.org/10.1093/annonc ] [ CrossRef ]
- Form Builder. Teamscope. URL: https://www.teamscopeapp.com/features/form-builder [accessed 2022-07-01]
- How It Works. Survey CTO. URL: https://www.surveycto.com/product/how-it-works/ [accessed 2022-07-01]
- Electronic Patient Reported Outcomes (ePRO). Castor. URL: https://www.castoredc.com/epro/ [accessed 2022-09-07]
- KoBoToolbox. URL: https://www.kobotoolbox.org/ [accessed 2022-09-07]
- REDCap. URL: https://www.project-redcap.org/ [accessed 2022-09-07]
- ODK. URL: https://getodk.org/ [accessed 2022-09-07]
- Basch E, Reeve BB, Mitchell SA, Clauser SB, Minasian LM, Dueck AC, et al. Development of the National Cancer Institute's patient-reported outcomes version of the common terminology criteria for adverse events (PRO-CTCAE). J Natl Cancer Inst 2014 Sep 29;106(9):dju244-dju244 [ https://europepmc.org/abstract/MED/25265940 ] [ CrossRef ] [ Medline ]
- Automate your collection of patient-reported outcome measures (ePRO) and patient-reported experience measures and generate real-time reports. Buddy Healthcare. URL: https://www.buddyhealthcare.com/en/electronic-patient-reported-outcomes-collection [accessed 2022-09-07]
- Electronic Patient-Reported Outcomes. Patient IQ. URL: https://www.patientiq.io/platform/patient-reported-outcomes [accessed 2022-09-07]
- My Clinical Outcomes. URL: https://www.myclinicaloutcomes.com/ [accessed 2022-09-07]
- Outcome Measurement. Philips. URL: https://www.philips.com.my/healthcare/clinical-solutions/patient-reported-outcomes [accessed 2022-09-07]
- Heartbeat Medical. URL: https://heartbeat-med.com/#sgq0skwxtaeb12w41a6ro [accessed 2022-09-07]
- The Clinician. URL: https://theclinician.com/proms [accessed 2022-09-07]
- Force Therapeutics. URL: https://www.forcetherapeutics.com/why-force/ [accessed 2022-09-07]
- Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007 Dec;19(6):349-357 [ https://doi.org/10.1093/intqhc/mzm042 ] [ CrossRef ] [ Medline ]
- Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs 2008 Apr;62(1):107-115 [ CrossRef ] [ Medline ]
- Identificando brechas en trayectorias terapéuticas de pacientes adultos con cáncer de mama y pulmón en chile develando desigualdades en la atención de patologías priorizadas (Identifying gaps in the therapeutic trajectories of adult patients with breast and lung cancer in Chile, revealing inequalities in the care of prioritized pathologies). Universidad del Desarrollo. URL: https://repositorio.udd.cl/server/api/core/bitstreams/b2553d48-f376-436a-af28-77c35e647d6c/content [accessed 2022-12-10]
- Magna AAR, Allende-Cid H, Taramasco C, Becerra C, Figueroa RL. Application of Machine Learning and Word Embeddings in the Classification of Cancer Diagnosis Using Patient Anamnesis. IEEE Access 2020;8:106198-106213 [ CrossRef ]
- Noël R, Taramasco C, Márquez G. Standards, Processes, and Tools Used to Evaluate the Quality of Health Information Systems: Systematic Literature Review. J Med Internet Res 2022 Mar 08;24(3):e26577 [ https://www.jmir.org/2022/3/e26577/ ] [ CrossRef ] [ Medline ]
- Taramasco C, Figueroa K, Lazo Y, Demongeot J. Estimation of life expectancy of patients diagnosed with the most common cancers in the Valparaiso Region, Chile. Ecancermedicalscience 2017 Jan 17;11:713 [ https://europepmc.org/abstract/MED/28144287 ] [ CrossRef ] [ Medline ]
- Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A. Experimentation in Software Engineering. Berlin, Germany. Springer; 2012.
- Runeson P, Höst M. Guidelines for conducting and reporting case study research in software engineering. Empir Software Eng 2009;14(2):131-164
- Nielsen J, Molich R. Heuristic evaluation of user interfaces. In: CHI '90: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1990 Presented at: SIGCHI Conference on Human Factors in Computing Systems; April 1-5, 1990; Seattle, Washington, USA p. 249-256 [ CrossRef ]
- Bertini E, Gabrielli S, Kimani S, Catarci T, Santucci G. Appropriating and assessing heuristics for mobile computing. In: AVI '06: Proceedings of the Working Conference on Advanced Visual Interfaces. 2006 Presented at: International Conference on Advanced Visual Interfaces; May 23-26, 2006; Venezia, Italy p. 119-126 [ CrossRef ]
Edited by T Leung, C Hoving; submitted 24.01.23; peer-reviewed by N Mungoli, I Iakovou; comments to author 03.04.23; revised version received 30.06.23; accepted 13.10.23; published 27.11.23
©Carla Taramasco, Carla Rimassa, René Noël, María Loreto Bravo Storm, César Sánchez. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 27.11.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
What Is a Case Study Research Design?
A case study research design is a research methodology that relies heavily on the use of one or more case studies as the primary data source. Case studies are in-depth investigations of a single individual, organization, event, or phenomenon. Case studies are often used in social science research, but they can be used in any field of research.
There are many different types of case studies, but all case studies have four basic components:
1. The case being studied
2. The context of the case
3. The methods used to gather data
4. The analysis and interpretation of the data
The case being studied is the focal point of the case study. The context of the case includes the historical, social, and cultural background of the case. The methods used to gather data include interviews, observations, and document analysis. The analysis and interpretation of the data is where the researcher draws conclusions from the data and makes recommendations.
There are several benefits of using a case study research design. First, case studies provide a more in-depth understanding of a single case than other research methods. Second, case studies can be used to explore a phenomenon from multiple perspectives. Third, case studies can be used to generate hypotheses for further research. Fourth, case studies can be used to develop new theories. And fifth, case studies can be used to provide recommendations for practice.
There are also several limitations of using a case study research design. First, case studies are time-consuming and expensive to conduct. Second, case studies can be biased if the researcher is not objective. Third, case studies can be limited to a single perspective. Fourth, case studies can be difficult to generalize to a larger population. And fifth, case studies can be difficult to replicate.
Despite these limitations, case studies remain a valuable research method for exploring a single case in depth.
What is case study research design example?
A case study research design is a research approach that involves the study of an individual, group, or organization. Case study research is often used in the social sciences, but can be used in other disciplines as well.
There are several different types of case study research designs, but all involve the collection and analysis of data. The data collected can be qualitative or quantitative, and the analysis can be inductive or deductive.
The main purpose of case study research is to understand the complexities of a particular situation or phenomenon. Case study research can be used to generate new theories or to test existing theories.
Case study research is valuable for researchers because it allows them to explore a situation or phenomenon in depth, to better understand a problem, or to assess the feasibility of a new approach or solution. It is equally valuable for practitioners, who can use it to learn about a particular situation or problem and develop strategies to address it.
There are several factors to consider when designing a case study research study. These factors include the purpose of the study, the research question, the population of interest, the data collection methods, and the analysis methods.
The purpose of the study should be clear and concise. The research question should be specific and focused. The population of interest should be clearly defined. The data collection methods should be appropriate for the research question and the population of interest. The analysis methods should be appropriate for the type of data that will be collected.
A well-designed case study research study can provide valuable insights into a particular situation or problem.
What is case study research method?
Case study research is a qualitative research method that focuses on understanding a particular case, or a small number of cases, in depth. Studying a specific case closely allows the researcher to capture its complexities and nuances. The method is often used in the social sciences to understand a particular phenomenon, issue, or problem.
One key benefit of case study research is that it provides a detailed, in-depth understanding of a specific case or set of cases, which is especially useful when investigating a complex issue or problem. It also helps build a rich and detailed picture of the phenomenon being studied.
Another benefit is that case study research can generate new hypotheses or theories. By exploring a case in depth, researchers may identify patterns or relationships that were not previously observed, producing new ideas and hypotheses for further research.
However, case study research also has some limitations. One key limitation is that it can be difficult to generalize findings from a case study to a larger population. Additionally, case study research can be time-consuming and expensive to conduct.
What does case study mean in design?
A case study is a research method that involves an in-depth examination of a specific case or group of cases. Case studies may be used to assess the effectiveness of a program, to examine a policy, or to explore a phenomenon.
In the field of design, case studies can be used to examine the efficacy of a particular design solution, to explore how a particular design was implemented or to assess the impact of a design on its users or environment. By analyzing a case study, designers can learn from the successes and failures of others and apply these lessons to their own work.
When conducting a case study, designers should aim to answer the following questions:
- What was the problem or challenge that the design was intended to address?
- What were the goals of the design?
- What design solutions were considered, and why did the designer choose the solution that was implemented?
- How was the design implemented, and what were the results?
- What were the positive and negative aspects of the design?
- What lessons can be learned from the case study?
What is case study in qualitative research design?
Qualitative case studies are one of several approaches researchers can use when conducting qualitative research. They are used to explore a phenomenon in-depth and to understand the complexities of a situation. In a qualitative case study, the researcher interviews participants, observes them, and reviews documents to gather data. The researcher then analyzes this data to develop a detailed understanding of the phenomenon.
Qualitative case studies can be useful in a variety of contexts. For example, they can be used to understand the experiences of a particular group of people, to explore a complex issue, or to understand how a particular event or experience affected someone.
When designing a qualitative case study, there are a few things to keep in mind. First, the researcher should define the scope of the study and develop a research question or questions. Next, the researcher should select participants and develop a plan for gathering data. The researcher should also create an analysis plan, and finally, develop a way to present the findings.
Qualitative case studies can provide a rich and in-depth understanding of a phenomenon. They can be used to answer specific questions, to explore complex issues, or to understand the experiences of a particular group of people.
What are the 3 methods of case study?
There are three main methods of case study: the chronological case study, the comparative case study, and the cross-case study. Each has its own strengths and weaknesses, and is most effective when used for specific purposes.
The chronological case study is the most common type of case study. It follows the story of a particular case from beginning to end, telling the story of how the case developed and the decisions that were made along the way. This type of case study is useful for understanding the unfolding of a particular event or situation.
The comparative case study compares two or more cases in order to understand the similarities and differences between them. This type of case study is useful for understanding how different situations lead to different outcomes, or for understanding the impact of different variables on a particular outcome.
The cross-case study looks at a number of cases in order to identify patterns across them. This type of case study is useful for understanding how different factors interact to produce a particular outcome.
How do you identify a case study?
A case study is an in-depth examination of a single person, group, or event. It is a type of qualitative research that provides a detailed account of an individual’s or organization’s experience.
Case studies are often used in social work, psychology, and business education. They can be used to explore a problem, to test a hypothesis, or to investigate the effects of a particular event or situation.
When identifying a case study, it is important to consider the following factors:
Purpose: What is the purpose of the study?
Scope: What is the scope of the study?
Population: Who is the study population?
Methods: What methods will be used to collect data?
Analysis: What type of analysis will be conducted?
Results: What are the results of the study?
What are the features of case study research design?
A case study research design is a qualitative research methodology that involves the study of a single case or a small number of cases in depth. Case studies are often used in psychology, sociology, and business research, and can be used in other fields as well.
Several features distinguish case study research from other qualitative research designs:
- In-depth analysis of a single case or a small number of cases: the researcher spends a significant amount of time getting to know the case(s) and gathers data from a variety of sources.
- Multiple data collection methods, including interviews, observations, and documents, which yield a rich body of data for understanding the case in depth.
- Suitability for complex, real-world phenomena that cannot be studied in a laboratory setting; studying a real-world case gives the researcher a deeper understanding of how the phenomenon works.
- The ability to generate hypotheses that can be tested in future research.