Null Hypothesis and Inferential Statistics

We usually perform research with a limited number of individuals so that we can infer the behavior of all individuals similar to those we studied. For example, we might study how a group of individuals with depression responded to mindfulness meditation in terms of measures of distress.

What are the odds that the individuals with depression in our group are like all those with depression everywhere? To determine this probability, we use inferential statistics. Technically, we refer to all individuals with depression as the population and the particular participants in our study as the sample. Inferential statistics concerns the relationship between the statistical characteristics of the population and those of the sample.

One way of viewing the inferential process conceptually is to assume that the same experiment was run an infinite number of times, each time with a different sample of individuals chosen from the entire population. If we plotted the statistic from each of these experiments, the resulting distribution of estimates would represent all the possible outcomes of the experiment.

In more technical language, inferential statistics is used to infer, from a given sample of scores on some measure, the parameters for the set of all possible scores. All possible scores would be those of the population from which our particular sample was drawn. Implicit in this statement is the assumption that the sample we are discussing is the result of random sampling or some systematic form of sampling. That is, each person in the population is equally likely to be included in the sample, or is included with some known probability.
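As a rough illustration, this sampling idea can be simulated. In the Python sketch below, the population of distress scores, the sample size of 30, and the number of repeated "experiments" are all hypothetical values chosen for demonstration only:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical population of distress scores: mean 50, SD 10.
    population = rng.normal(loc=50, scale=10, size=100_000)

    # Run the "same experiment" many times, each time randomly sampling
    # 30 individuals so that everyone is equally likely to be included.
    sample_means = [rng.choice(population, size=30).mean() for _ in range(1_000)]

    # The distribution of these estimates stands in for all possible
    # outcomes of the experiment.
    print(f"mean of the sample means: {np.mean(sample_means):.2f}")
    print(f"spread (SD) of the means: {np.std(sample_means):.2f}")

The mean of the sample means falls close to the population mean, which is exactly what inferential statistics counts on when reasoning from a single sample back to the population.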

The important thing to remember is that inferential statistics constitutes a set of tools for inferring from a particular sample to larger populations. One way of viewing this conceptually is to ask how the statistics of our sample (that is, the mean and the standard deviation) match what we expect to be the same measures in the larger population.


Inferential statistics might be used to study how a group of individuals with depression respond to mindfulness meditation.

© iStockphoto.com/Tassii

Conceptually, we ask if our control and experimental groups could be considered equal before any treatment (IV) was introduced. That is, would we expect them to be drawn equally from the larger population? One way in which we seek to make our experimental and control groups equal is through random assignment to the groups. A critical question asked of empirically supported treatments (see Chapter 1) is whether the participants were randomly assigned to the treatment condition. If so, we regard the support for a particular treatment as more valid.

probability: the likelihood that a set of results in an experiment differs from what would be expected by chance

inferential statistics: a method of analysis that concerns the relationship between the statistical characteristics of the population and those of the experimental sample

sample: the particular participants in a study

Given a group of potential subjects, we could expect some of them to be motivated to be part of the experiment, others to be tired, some to be more intelligent, and some to have faster reaction times than others. By randomly assigning these individuals to groups, we would expect the two groups to be roughly equivalent on such characteristics.
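Random assignment itself is simple to carry out. A minimal Python sketch, with a hypothetical pool of 20 participants:

    import random

    random.seed(42)

    # Hypothetical pool of 20 volunteers, each differing in motivation,
    # fatigue, intelligence, reaction time, and so forth.
    participants = [f"P{i:02d}" for i in range(1, 21)]

    # Shuffle, then split: each person is equally likely to land in
    # either group, so individual differences should average out.
    random.shuffle(participants)
    treatment_group = participants[:10]
    control_group = participants[10:]

    print("treatment:", treatment_group)
    print("control:  ", control_group)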

Another part of the statistical treatment of the null hypothesis is related to probability. If you were to toss a coin a large number of times, you would expect a roughly equal number of heads and tails. This idea of no differences forms the basis of the null hypothesis, which was developed by Sir Ronald Fisher (1935). He sought to determine whether a set of results differed from what would be expected by chance.
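The coin-toss intuition is easy to simulate; the number of tosses below is arbitrary:

    import random

    random.seed(1)

    # Toss a fair coin 10,000 times. Under the "no difference" idea of
    # the null hypothesis, heads and tails should come up about equally
    # often in the long run.
    n_tosses = 10_000
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    print(f"heads: {heads}, tails: {n_tosses - heads}")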

What we need, of course, is a technique for determining if a set of results is different from what would be expected. One of the common statistical techniques used for this is called the t test. It was actually developed near the beginning of the last century by William Gosset, who worked for the Guinness Brewery in Dublin. Gosset wanted a way of knowing if all the batches of beer were the same. In this case, he actually wanted the null hypothesis to be true. Fisher developed the F test, which is conceptually similar to the t test. In fact, when two groups are compared, t² = F mathematically.
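The t–F relationship can be checked directly with standard statistical software. The Python sketch below uses the SciPy library with made-up distress scores for two groups of 30; the group means and standard deviations are hypothetical:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical distress scores for 30 people per group.
    control = rng.normal(loc=50, scale=10, size=30)
    experimental = rng.normal(loc=44, scale=10, size=30)

    t, p_t = stats.ttest_ind(experimental, control)  # Gosset's t test
    f, p_f = stats.f_oneway(experimental, control)   # Fisher's F test

    print(f"t = {t:.3f} (p = {p_t:.4f})")
    print(f"F = {f:.3f} (p = {p_f:.4f})")
    print(f"t squared = {t**2:.3f}, which equals F for two groups")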

In our experiment, we can think of the t test or F test as asking what the difference in reaction time is between the experimental and control groups. The larger the difference, the more certain we can be that the IV had an effect. In the end, we are never fully certain that our results are or are not due to chance. Instead, we use statistics to make a best guess by assigning a probability to the statement that our results are not due to chance alone. That is, we may say that the results from our study could have happened by chance less than 1 time out of every 100. Said another way, if we ran the same study 100 times, each time with a different set of subjects drawn from the total population, what are the odds that a difference this large would arise by chance alone?
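One way to picture this logic is to simulate running the study many times when the null hypothesis is true. Everything in the sketch below (the observed 35 ms difference, the population mean and standard deviation, the group size) is hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Suppose the observed difference in mean reaction time between
    # the experimental and control groups was 35 ms (hypothetical).
    observed_diff = 35.0

    # Simulate the study many times under the null hypothesis: both
    # groups come from the same population, so any difference between
    # their means is due to chance alone.
    n_per_group = 30
    chance_diffs = []
    for _ in range(10_000):
        a = rng.normal(loc=400, scale=50, size=n_per_group)
        b = rng.normal(loc=400, scale=50, size=n_per_group)
        chance_diffs.append(abs(a.mean() - b.mean()))

    # The proportion of simulated studies showing a difference at least
    # as large as the observed one estimates the odds of getting such a
    # result by chance alone.
    p = np.mean(np.array(chance_diffs) >= observed_diff)
    print(f"chance of a difference of {observed_diff} ms or more: {p:.4f}")

If that proportion comes out below .01, we would say the observed result could have happened by chance less than 1 time out of every 100.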
