Select a quantitative research study of interest from the literature and address the following components:

  • Describe the purpose of the study.
  • Describe the type of research design used (e.g., experimental, quasi-experimental, correlational, causal-comparative).
  • Describe how reliability and validity were assessed.
  • Evaluate the reliability of the study.
  • Evaluate the internal validity of the study.
  • Evaluate the statistical conclusion validity of the study.
  • List potential biases or limitations of the study.

Length: 6 pages, not including title and reference pages

References: Include a minimum of 6 scholarly resources

The completed assignment should address all of the assignment requirements, exhibit evidence of concept knowledge, and demonstrate thoughtful consideration of the content presented in the course. The writing should integrate scholarly resources, reflect academic expectations and current APA standards (as required), and include a plagiarism report.

  • attachment

    StatisticalConclusionValiditysomecommonthreatsandsimpleremedies.pdf
  • attachment

    ThreatstotheVALIDITYofRESEARCH.pdf


ORIGINAL RESEARCH ARTICLE published: 29 August 2012

doi: 10.3389/fpsyg.2012.00325

Statistical conclusion validity: some common threats and simple remedies

Miguel A. García-Pérez*

Facultad de Psicología, Departamento de Metodología, Universidad Complutense, Madrid, Spain

Edited by: Jason W. Osborne, Old Dominion University, USA

Reviewed by: Megan Welsh, University of Connecticut, USA David Flora, York University, Canada

*Correspondence: Miguel A. García-Pérez, Facultad de Psicología, Departamento de Metodología, Campus de Somosaguas, Universidad Complutense, 28223 Madrid, Spain. e-mail: [email protected]

The ultimate goal of research is to produce dependable knowledge or to provide the evidence that may guide practical decisions. Statistical conclusion validity (SCV) holds when the conclusions of a research study are founded on an adequate analysis of the data, generally meaning that adequate statistical methods are used whose small-sample behavior is accurate, besides being logically capable of providing an answer to the research question. Compared to the three other traditional aspects of research validity (external validity, internal validity, and construct validity), interest in SCV has recently grown on evidence that inadequate data analyses are sometimes carried out which yield conclusions that a proper analysis of the data would not have supported. This paper discusses evidence of three common threats to SCV that arise from widespread recommendations or practices in data analysis, namely, the use of repeated testing and optional stopping without control of Type-I error rates, the recommendation to check the assumptions of statistical tests, and the use of regression whenever a bivariate relation or the equivalence between two variables is studied. For each of these threats, examples are presented and alternative practices that safeguard SCV are discussed. Educational and editorial changes that may improve the SCV of published research are also discussed.

Keywords: data analysis, validity of research, regression, stopping rules, preliminary tests

Psychologists are well aware of the traditional aspects of research validity introduced by Campbell and Stanley (1966) and further subdivided and discussed by Cook and Campbell (1979). Despite initial criticisms of the practically oriented and somewhat fuzzy distinctions among the various aspects (see Cook and Campbell, 1979, pp. 85–91; see also Shadish et al., 2002, pp. 462–484), the four facets of research validity have gained recognition and they are currently covered in many textbooks on research methods in psychology (e.g., Beins, 2009; Goodwin, 2010; Girden and Kabacoff, 2011). Methods and strategies aimed at securing research validity are also discussed in these and other sources. To simplify the description, construct validity is sought by using well-established definitions and measurement procedures for variables, internal validity is sought by ensuring that extraneous variables have been controlled and confounds have been eliminated, and external validity is sought by observing and measuring dependent variables under natural conditions or under an appropriate representation of them. The fourth aspect of research validity, which Cook and Campbell called statistical conclusion validity (SCV), is the subject of this paper.

Cook and Campbell (1979, pp. 39–50) discussed that SCV pertains to the extent to which data from a research study can reasonably be regarded as revealing a link (or lack thereof) between independent and dependent variables as far as statistical issues are concerned. This particular facet was separated from other factors acting in the same direction (the three other facets of validity) and includes three aspects: (1) whether the study has enough statistical power to detect an effect if it exists, (2) whether there is a risk that the study will “reveal” an effect that does not actually exist, and (3) how can the magnitude of the effect be confidently estimated. They nevertheless considered the latter aspect as a mere step ahead once the first two aspects had been satisfactorily solved, and they summarized their position by stating that SCV “refers to inferences about whether it is reasonable to presume covariation given a specified α level and the obtained variances” (Cook and Campbell, 1979, p. 41). Given that mentioning “the obtained variances” was an indirect reference to statistical power and mentioning α was a direct reference to statistical significance, their position about SCV may have seemed to only entail consideration that the statistical decision can be incorrect as a result of Type-I and Type-II errors. Perhaps as a consequence of this literal interpretation, review papers studying SCV in published research have focused on power and significance (e.g., Ottenbacher, 1989; Ottenbacher and Maas, 1999), strategies aimed at increasing SCV have only considered these issues (e.g., Howard et al., 1983), and tutorials on the topic only or almost only mention these issues along with effect sizes (e.g., Orme, 1991; Austin et al., 1998; Rankupalli and Tandon, 2010). This emphasis on issues of significance and power may also be the reason that some sources refer to threats to SCV as “any factor that leads to a Type-I or a Type-II error” (e.g., Girden and Kabacoff, 2011, p. 6; see also Rankupalli and Tandon, 2010, Section 1.2), as if these errors had identifiable causes that could be prevented. It should be noted that SCV has also occasionally been purported to reflect the extent to which pre-experimental designs provide evidence for causation (Lee, 1985) or the extent to which meta-analyses are based on representative results that make the conclusion generalizable (Elvik, 1998).

But Cook and Campbell’s (1979, p. 80) aim was undoubtedly broader, as they stressed that SCV “is concerned with sources of random error and with the appropriate use of statistics and statistical tests” (italics added). Moreover, Type-I and Type-II errors are an essential and inescapable consequence of the statistical decision theory underlying significance testing and, as such, the potential occurrence of one or the other of these errors cannot be prevented. The actual occurrence of them for the data on hand cannot be assessed either. Type-I and Type-II errors will always be with us and, hence, SCV is only trivially linked to the fact that research will never unequivocally prove or reject any statistical null hypothesis or its originating research hypothesis. Cook and Campbell seemed to be well aware of this issue when they stressed that SCV refers to reasonable inferences given a specified significance level and a given power. In addition, Stevens (1950, p. 121) forcefully emphasized that “it is a statistician’s duty to be wrong the stated number of times,” implying that a researcher should accept the assumed risks of Type-I and Type-II errors, use statistical methods that guarantee the assumed error rates, and consider these as an essential part of the research process. From this position, these errors do not affect SCV unless their probability differs meaningfully from that which was assumed. And this is where an alternative perspective on SCV enters the stage, namely, whether the data were analyzed properly so as to extract conclusions that faithfully reflect what the data have to say about the research question. A negative answer raises concerns about SCV beyond the triviality of Type-I or Type-II errors. There are actually two types of threat to SCV from this perspective. One is when the data are subjected to thoroughly inadequate statistical analyses that do not match the characteristics of the design used to collect the data or that cannot logically give an answer to the research question. The other is when a proper statistical test is used but it is applied under conditions that alter the stated risk probabilities. In the former case, the conclusion will be wrong except by accident; in the latter, the conclusion will fail to be incorrect with the declared probabilities of Type-I and Type-II errors.

The position elaborated in the foregoing paragraph is well summarized in Milligan and McFillen’s (1984, p. 439) statement that “under normal conditions (. . .) the researcher will not know when a null effect has been declared significant or when a valid effect has gone undetected (. . .) Unfortunately, the statistical conclusion validity, and the ultimate value of the research, rests on the explicit control of (Type-I and Type-II) error rates.” This perspective on SCV is explicitly discussed in some textbooks on research methods (e.g., Beins, 2009, pp. 139–140; Goodwin, 2010, pp. 184–185) and some literature reviews have been published that reveal a sound failure of SCV in these respects.

For instance, Milligan and McFillen (1984, p. 438) reviewed evidence that “the business research community has succeeded in publishing a great deal of incorrect and statistically inadequate research” and they dissected and discussed in detail four additional cases (among many others that reportedly could have been chosen) in which a breach of SCV resulted from gross mismatches between the research design and the statistical analysis. Similarly, García-Pérez (2005) reviewed alternative methods to compute confidence intervals for proportions and discussed three papers (among many others that reportedly could have been chosen) in which inadequate confidence intervals had been computed. More recently, Bakker and Wicherts (2011) conducted a thorough analysis of psychological papers and estimated that roughly 50% of published papers contain reporting errors, although they only checked whether the reported p value was correct and not whether the statistical test used was appropriate. A similar analysis carried out by Nieuwenhuis et al. (2011) revealed that 50% of the papers reporting the results of a comparison of two experimental effects in top neuroscience journals had used an incorrect statistical procedure. And Bland and Altman (2011) reported further data on the prevalence of incorrect statistical analyses of a similar nature.

An additional indicator of the use of inadequate statistical procedures arises from consideration of published papers whose title explicitly refers to a re-analysis of data reported in some other paper. A literature search for papers including in their title the terms “a re-analysis,” “a reanalysis,” “re-analyses,” “reanalyses,” or “alternative analysis” was conducted on May 3, 2012 in the Web of Science (WoS; http://thomsonreuters.com), which rendered 99 such papers with subject area “Psychology” published in 1990 or later. Although some of these were false positives, a sizeable number of them actually discussed the inadequacy of analyses carried out by the original authors and reported the results of proper alternative analyses that typically reversed the original conclusion. This type of outcome upon re-analyses of data is more frequent than the results of this quick and simple search suggest, because the information for identification is not always included in the title of the paper or is included in some other form: For a simple example, the search for the clause “a closer look” in the title rendered 131 papers, many of which also presented re-analyses of data that reversed the conclusion of the original study.

Poor design or poor sample size planning may, unbeknownst to the researcher, lead to unacceptable Type-II error rates, which will certainly affect SCV (as long as the null is not rejected; if it is, the probability of a Type-II error is irrelevant). Although insufficient power due to lack of proper planning has consequences on statistical tests, the thread of this paper de-emphasizes this aspect of SCV (which should perhaps more reasonably fit within an alternative category labeled design validity) and emphasizes the idea that SCV holds when statistical conclusions are incorrect with the stated probabilities of Type-I and Type-II errors (whether the latter was planned or simply computed). Whether or not the actual significance level used in the research or the power that it had is judged acceptable is another issue, which does not affect SCV: The statistical conclusion is valid within the stated (or computed) error probabilities. A breach of SCV occurs, then, when the data are not subjected to adequate statistical analyses or when control of Type-I or Type-II errors is lost.

It should be noted that a further component was included into consideration of SCV in Shadish et al.’s (2002) sequel to Cook and Campbell’s (1979) book, namely, effect size. Effect size relates to what has been called a Type-III error (Crawford et al., 1998), that is, a statistically significant result that has no meaningful practical implication and that only arises from the use of a huge sample. This issue is left aside in the present paper because adequate consideration and reporting of effect sizes precludes Type-III errors, although the recommendations of Wilkinson and The Task Force on Statistical Inference (1999) in this respect are not always followed. Consider, e.g., Lippa’s (2007) study of the relation between sex drive and sexual attraction. Correlations generally lower than 0.3 in absolute value were declared strong as a result of p values below 0.001. With sample sizes sometimes nearing 50,000 paired observations, even correlations valued at 0.04 turned out significant in this study. More attention to effect sizes is certainly needed, both by researchers and by journal editors and reviewers.
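To make the contrast between statistical significance and effect size concrete, the following sketch (with values assumed for illustration, in the spirit of the Lippa example above, not taken from that study) shows how a correlation of only 0.04 becomes highly significant with 50,000 paired observations while explaining a negligible share of variance.

```python
# Illustrative sketch (values assumed, not from Lippa's study): a tiny
# correlation reaches "high significance" purely through sample size.
import math
from scipy import stats

r, n = 0.04, 50_000                         # assumed example values

# t statistic for H0: rho = 0, and its two-tailed p value
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
p = 2 * stats.t.sf(abs(t), df=n - 2)

print(f"t = {t:.2f}, p = {p:.1e}")          # t is about 8.95, p far below .001
print(f"shared variance r^2 = {r**2:.4f}")  # 0.0016, i.e., 0.16% of the variance
```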

The remainder of this paper analyzes three common practices that result in SCV breaches, also discussing simple replacements for them.

STOPPING RULES FOR DATA COLLECTION WITHOUT CONTROL OF TYPE-I ERROR RATES

The asymptotic theory that provides justification for null hypothesis significance testing (NHST) assumes what is known as fixed sampling, which means that the size n of the sample is not itself a random variable or, in other words, that the size of the sample has been decided in advance and the statistical test is performed once the entire sample of data has been collected. Numerous procedures have been devised to determine the size that a sample must have according to planned power (Ahn et al., 2001; Faul et al., 2007; Nisen and Schwertman, 2008; Jan and Shieh, 2011), the size of the effect sought to be detected (Morse, 1999), or the width of the confidence intervals of interest (Graybill, 1958; Boos and Hughes-Oliver, 2000; Shieh and Jan, 2012). For reviews, see Dell et al. (2002) and Maxwell et al. (2008). In many cases, a researcher simply strives to gather as large a sample as possible. Asymptotic theory supports NHST under fixed sampling assumptions, whether or not the size of the sample was planned.
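As a minimal illustration of such a priori sample size planning, the sketch below uses a generic textbook normal-approximation formula rather than any of the specific procedures cited above; the effect size, alpha, and power are assumed example values.

```python
# Illustrative sketch: a priori sample size for a two-sample t test from the
# normal-approximation formula n_per_group ~= 2 * (z_{1-a/2} + z_{1-b})^2 / d^2.
# Effect size, alpha, and power are assumed example values.
import math
from scipy import stats

d, alpha, power = 0.5, 0.05, 0.80
z_a = stats.norm.ppf(1 - alpha / 2)   # about 1.96
z_b = stats.norm.ppf(power)           # about 0.84

n_per_group = math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)
print(f"n per group: {n_per_group}")  # 63; an exact t-based calculation gives 64
```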

In contrast to fixed sampling, sequential sampling implies that the number of observations is not fixed in advance but depends by some rule on the observations already collected (Wald, 1947; Anscombe, 1953; Wetherill, 1966). In practice, data are analyzed as they come in and data collection stops when the observations collected thus far satisfy some criterion. The use of sequential sampling faces two problems (Anscombe, 1953, p. 6): (i) devising a suitable stopping rule and (ii) finding a suitable test statistic and determining its sampling distribution. The mere statement of the second problem evidences that the sampling distribution of conventional test statistics for fixed sampling no longer holds under sequential sampling. These sampling distributions are relatively easy to derive in some cases, particularly in those involving negative binomial parameters (Anscombe, 1953; García-Pérez and Núñez-Antón, 2009). The choice between fixed and sequential sampling (sometimes portrayed as the “experimenter’s intention”; see Wagenmakers, 2007) has important ramifications for NHST because the probability that the observed data are compatible (by any criterion) with a true null hypothesis generally differs greatly across sampling methods. This issue is usually bypassed by those who look at the data as a “sure fact” once collected, as if the sampling method used to collect the data did not make any difference or should not affect how the data are interpreted.

There are good reasons for using sequential sampling in psychological research. For instance, in clinical studies in which patients are recruited on the go, the experimenter may want to analyze data as they come in to be able to prevent the administration of a seemingly ineffective or even hurtful treatment to new patients. In studies involving a waiting-list control group, individuals in this group are generally transferred to an experimental group midway along the experiment. In studies with laboratory animals, the experimenter may want to stop testing animals before the planned number has been reached so that animals are not wasted when an effect (or the lack thereof) seems established. In these and analogous cases, the decision as to whether data will continue to be collected results from an analysis of the data collected thus far, typically using a statistical test that was devised for use in conditions of fixed sampling. In other cases, experimenters test their statistical hypothesis each time a new observation or block of observations is collected, and continue the experiment until they feel the data are conclusive one way or the other. Software has been developed that allows experimenters to find out how many more observations will be needed for a marginally non-significant result to become significant on the assumption that sample statistics will remain invariant when the extra data are collected (Morse, 1998).

The practice of repeated testing and optional stopping has been shown to affect in unpredictable ways the empirical Type-I error rate of statistical tests designed for use under fixed sampling (Anscombe, 1954; Armitage et al., 1969; McCarroll et al., 1992; Strube, 2006; Fitts, 2011a). The same holds when a decision is made to collect further data on evidence of a marginally (non) significant result (Shun et al., 2001; Chen et al., 2004). The inaccuracy of statistical tests in these conditions represents a breach of SCV, because the statistical conclusion thus fails to be incorrect with the assumed (and explicitly stated) probabilities of Type-I and Type-II errors. But there is an easy way around the inflation of Type-I error rates from within NHST, which solves the threat to SCV that repeated testing and optional stopping entail.
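The following Monte Carlo sketch (constructed for this summary, not an analysis from the paper) shows the kind of Type-I error inflation at issue: a one-sample t test designed for fixed sampling is recomputed as data accumulate, and collection stops at the first p < 0.05. Batch size, maximum sample size, and the number of replications are arbitrary assumptions.

```python
# Monte Carlo sketch (constructed for illustration): empirical Type-I error
# rate when a fixed-sampling one-sample t test is recomputed after every batch
# and collection stops at the first p < .05. H0 is true throughout.
# Batch size, maximum n, and replication count are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, batch, max_n, alpha = 10_000, 10, 100, 0.05

rejections = 0
for _ in range(n_reps):
    data = np.empty(0)
    while data.size < max_n:
        data = np.append(data, rng.normal(0.0, 1.0, batch))  # null is true
        if stats.ttest_1samp(data, 0.0).pvalue < alpha:      # "peek" and stop
            rejections += 1
            break

print(f"empirical Type-I error rate: {rejections / n_reps:.3f}")
# Well above the nominal .05 with these settings (roughly .15-.20).
```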

In what appears to be the first development of a sequential procedure with control of Type-I error rates in psychology, Frick (1998) proposed that repeated statistical testing be conducted under the so-called COAST (composite open adaptive sequential test) rule: If the test yields p < 0.01, stop collecting data and reject the null; if it yields p > 0.36, stop also and do not reject the null; otherwise, collect more data and re-test. The low criterion at 0.01 and the high criterion at 0.36 were selected through simulations so as to ensure a final Type-I error rate of 0.05 for paired-samples t tests. Use of the same low and high criteria rendered similar control of Type-I error rates for tests of the product-moment correlation, but they yielded slightly conservative tests of the interaction in 2 × 2 between-subjects ANOVAs. Frick also acknowledged that adjusting the low and high criteria might be needed in other cases, although he did not address them. This has nevertheless been done by others who have modified and extended Frick’s approach (e.g., Botella et al., 2006; Ximenez and Revuelta, 2007; Fitts, 2010a,b, 2011b). The result is sequential procedures with stopping rules that guarantee accurate control of final Type-I error rates for the statistical tests that are more widely used in psychological research.
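A minimal sketch of the COAST rule as described above, applied to a paired-samples t test on the accumulating difference scores, might look as follows; the batch size, the maximum-sample-size safeguard, and the simulated data generator are assumptions for illustration.

```python
# Sketch of the COAST stopping rule described above, applied to a paired-samples
# t test on the accumulating difference scores. The batch size, the safety cap
# max_n, and the data generator used in the example are assumptions.
import numpy as np
from scipy import stats

def coast_paired_t(draw_pairs, low=0.01, high=0.36, batch=10, max_n=200, rng=None):
    """draw_pairs(k, rng) must return k new (condition1, condition2) pairs."""
    rng = rng or np.random.default_rng()
    d = np.empty(0)                                  # differences collected so far
    while d.size < max_n:
        x, y = draw_pairs(batch, rng)
        d = np.append(d, x - y)
        p = stats.ttest_1samp(d, 0.0).pvalue         # paired t via differences
        if p < low:
            return "reject H0", d.size, p            # stop: significant
        if p > high:
            return "do not reject H0", d.size, p     # stop: retain the null
    return "undecided at max_n", d.size, p           # otherwise keep collecting

# Example use with simulated data for which the null hypothesis is true:
draw = lambda k, rng: (rng.normal(0, 1, k), rng.normal(0, 1, k))
print(coast_paired_t(draw, rng=np.random.default_rng(1)))
```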

Yet, these methods do not seem to have ever been used in actual research, or at least their use has not been acknowledged. For instance, of the nine citations to Frick’s (1998) paper listed in WoS as of May 3, 2012, only one is from a paper (published in 2011) in which the COAST rule was reportedly used, although unintendedly. And not a single citation is to be found in WoS from papers reporting the use of the extensions and modifications of Botella et al. (2006) or Ximenez and Revuelta (2007). Perhaps researchers in psychology invariably use fixed sampling, but it is hard to believe that “data peeking” or “data monitoring” was never used, or that the results of such interim analyses never led researchers to collect some more data. Wagenmakers (2007, p. 785) regretted that “it is not clear what percentage of p values reported in experimental psychology have been contaminated by some form of optional stopping. There is simply no information in Results sections that allows one to assess the extent to which optional stopping has occurred.” This incertitude was quickly resolved by John et al. (2012). They surveyed over 2000 psychologists with highly revealing results: Respondents affirmatively admitted to the practices of data peeking, data monitoring, or conditional stopping in rates that varied between 20 and 60%.

Besides John et al.’s (2012) proposal that authors disclose these details in full and Simmons et al.’s (2011) proposed list of requirements for authors and guidelines for reviewers, the solution to the problem is simple: Use strategies that control Type-I error rates upon repeated testing and optional stopping. These strategies have been widely used in biomedical research for decades (Bauer and Köhne, 1994; Mehta and Pocock, 2011). There is no reason that psychological research should ignore them and give up efficient research with control of Type-I error rates, particularly when these strategies have also been adapted and further developed for use under the most common designs in psychological research (Frick, 1998; Botella et al., 2006; Ximenez and Revuelta, 2007; Fitts, 2010a,b).

It should also be stressed that not all instances of repeated testing or optional stopping without control of Type-I error rates threaten SCV. A breach of SCV occurs only when the conclusion regarding the research question is based on the use of these practices. For an acceptable use, consider the study of Xu et al. (2011). They investigated order preferences in primates to find out whether primates preferred to receive the best item first rather than last. Their procedure involved several experiments and they declared that “three significant sessions (two-tailed binomial tests per session, p < 0.05) or 10 consecutive non-significant sessions were required from each monkey before moving to the next experiment. The three significant sessions were not necessarily consecutive (. . .) Ten consecutive non-significant sessions were taken to mean there was no preference by the monkey” (p. 2304). In this case, the use of repeated testing with optional stopping at a nominal 95% significance level for each individual test is part of the operational definition of an outcome variable used as a criterion to proceed to the next experiment. And, in any event, the overall probability of misclassifying a monkey according to this criterion is certainly fixed at a known value that can easily be worked out from the significance level declared for each individual binomial test. One may object to the value of the resultant risk of misclassification, but this does not raise concerns about SCV.
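Under the simplifying assumption that sessions are independent and that each session's binomial test comes out significant with probability 0.05 when the monkey has no preference, the misclassification probability mentioned above works out analytically to (1 − 0.95^10)^3, about 0.065, and can be checked by simulation as in the following sketch (constructed for this summary, not taken from Xu et al.).

```python
# Monte Carlo sketch (constructed for illustration, not taken from Xu et al.):
# probability that a monkey with NO preference is classified as having one,
# i.e., that 3 significant sessions occur before 10 consecutive non-significant
# ones, assuming independent sessions each significant with probability .05
# under the null. Analytically this is (1 - 0.95**10)**3, about 0.065.
import numpy as np

rng = np.random.default_rng(0)
n_reps, alpha = 100_000, 0.05

misclassified = 0
for _ in range(n_reps):
    sig_sessions, nonsig_run = 0, 0
    while True:
        if rng.random() < alpha:       # this session's test comes out significant
            sig_sessions += 1
            nonsig_run = 0
            if sig_sessions == 3:      # classified as showing a preference
                misclassified += 1
                break
        else:
            nonsig_run += 1
            if nonsig_run == 10:       # classified as showing no preference
                break

print(f"P(misclassified | no preference): {misclassified / n_reps:.3f}")
```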

In sum, the use of repeated testing with optional stopping threatens SCV for lack of control of Type-I and Type-II error rates. A simple way around this is to refrain from these practices and adhere to the fixed sampling assumptions of statistical tests; otherwise, use the statistical methods that have been developed for use with repeated testing and optional stopping.

PRELIMINARY TESTS OF ASSUMPTIONS

To derive the sampling distribution of test statistics used in parametric NHST, some assumptions must be made about the probability distribution of the observations or about the parameters of these distributions. The assumptions of normality of distributions (in all tests), homogeneity of variances (in Student’s two-sample t test for means or in ANOVAs involving between-subjects factors), sphericity (in repeated-measures ANOVAs), homoscedasticity (in regression analyses), or homogeneity of regression slopes (in ANCOVAs) are well known cases. The data on hand may or may not meet these assumptions and some parametric tests have been devised under alternative assumptions (e.g., Welch’s test for two-sample means, or correction factors for the degrees of freedom of F statistics from ANOVAs). Most introductory statistics textbooks emphasize that the assumptions underlying statistical tests must be formally tested to guide the choice of a suitable test statistic for the null hypothesis of interest. Although this recommendation seems reasonable, serious consequences on SCV arise from following it.

Numerous studies conducted over the past decades have shown that the two-stage approach of testing assumptions first and subsequently testing the null hypothesis of interest has severe effects on Type-I and Type-II error rates. It may seem at first sight that this is simply the result of cascaded binary decisions each of which has its own Type-I and Type-II error probabilities; yet, this is the result of more complex interactions of Type-I and Type-II error rates that do not have fixed (empirical) probabilities across the cases that end up treated one way or the other according to the outcomes of the preliminary test: The resultant Type-I and Type-II error rates of the conditional test cannot be predicted from those of the preliminary and conditioned tests. A thorough analysis of what factors affect the Type-I and Type-II error rates of two-stage approaches is beyond the scope of this paper but readers should be aware that nothing suggests in principle that a two-stage approach might be adequate. The situations that have been more thoroughly studied include preliminary goodness-of-fit tests for normality before conducting a one-sample t test (Easterling and Anderson, 1978; Schucany and Ng, 2006; Rochon and Kieser, 2011), preliminary tests of equality of variances before conducting a two-sample t test for means (Gans, 1981; Moser and Stevens, 1992; Zimmerman, 1996, 2004; Hayes and Cai, 2007), preliminary tests of both equality of variances and normality preceding two-sample t tests for means (Rasch et al., 2011), or preliminary tests of homoscedasticity before regression analyses (Ca
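A minimal simulation sketch of the two-stage strategy in the two-sample case (constructed for illustration, not reproduced from any of the studies cited above) applies Levene's test first and then chooses between Student's and Welch's t tests; the sample sizes, standard deviations, and replication count are assumptions.

```python
# Simulation sketch (constructed for illustration): empirical Type-I error rate
# of the two-stage strategy in which a preliminary Levene test for equality of
# variances decides between Student's and Welch's two-sample t tests.
# Sample sizes, standard deviations, and replication count are assumptions;
# here the smaller group has the larger variance, a configuration in which the
# pooled (Student) test is known to misbehave.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, alpha = 20_000, 0.05
n1, sd1 = 10, 3.0      # small group, large variance
n2, sd2 = 40, 1.0      # large group, small variance; both means 0, so H0 is true

rejections = 0
for _ in range(n_reps):
    x = rng.normal(0.0, sd1, n1)
    y = rng.normal(0.0, sd2, n2)
    equal_var = stats.levene(x, y).pvalue > alpha              # stage 1
    p = stats.ttest_ind(x, y, equal_var=equal_var).pvalue      # stage 2
    rejections += p < alpha

print(f"empirical Type-I error rate: {rejections / n_reps:.3f}")
# Deviates from the nominal .05; always using Welch's test stays close to it.
```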

USEFUL NOTES FOR: quantitative research study

Introduction

Quantitative research is a way of collecting and analyzing numerical data about people, places, or things. This can be done through surveys, structured interviews, or systematic observation. It’s important to know what you want to find out and whom you want to study before starting your research so that your results are valid and useful.

The definition of quantitative research

Quantitative research is a type of research that uses numbers to collect and analyze data. It’s used to answer questions that can be expressed numerically, such as “What percentage of people like this product?” or “How many customers do we have?”

Quantitative research is different from qualitative research in that it uses structured, numerical methods (such as closed-ended surveys) instead of subjective evaluations by interviewers. Quantitative studies don’t explore people’s experiences in depth; rather, they ask for measurable responses to specific questions, for example, whether you think your company should offer free shipping to new customers who spend more than $100 per month, or whether you would prefer an annual subscription over monthly billing.

The results of these surveys will help you determine how many customers value, or do not value, the features offered by your business.

What does a participant look like?

You will need to recruit participants for your study. Your participants may be volunteers, or they may be paid. A volunteer joins your research project because he or she wants to help with it. Alternatively, you can compensate people, with money or other incentives, in exchange for taking part in a particular activity, such as completing an interview or filling out questionnaires.

It’s important to decide how many people should participate in your study. Two main factors are involved: how much data you need, and how relevant those data are compared with other available sources.

Data collection measures for quantitative studies

  • How do you know what to study?
  • How do you collect data?
  • How do you analyze data?
  • What are the limitations of quantitative research?

Qualitative vs. quantitative research

Qualitative research is used to understand the meaning of a phenomenon. It involves interviews, open-ended surveys, and other methods of collecting data. Quantitative research is used to measure phenomena or collect data in order to describe or explain them with statistical methods (e.g., descriptive statistics, correlation, or regression).

Qualitative researchers use qualitative approaches when they want to explore issues related to meaning and understanding; they use quantitative approaches when they want answers about numbers (e.g., sales figures).

It’s important to know what you want to find out and who you want to study before you begin your research to make sure that your results are valid and useful.

Before you begin your research, it’s important to know what you want to find out and whom you want to study. If your goal is to learn more about a certain topic or group of people and their experiences with the product or service being studied, then this information will help guide the questions that should be asked during the data collection process.

It’s also important for researchers to consider why they are conducting a study in the first place; this can help them determine whether existing sources (e.g., previously published surveys) already provide enough answers, or whether new data need to be collected through quantitative methods such as surveys.

Conclusion

With all of these variables and requirements, it can be pretty overwhelming to begin your own study. Luckily, there is an easy way out: qualitative research! This type of approach will allow you to focus on the participants you want to study and learn more about their lives while eliminating many of the practical problems with quantitative studies.