Descriptive Studies

The objective of a descriptive study is to learn the who, what, when, where, and how of a topic. The study may be simple or complex, and it may be done in many settings. Whatever the form, a descriptive study can be just as demanding of research skills as the causal study, and we should insist on the same high standards for design and execution.

The simplest descriptive study concerns a univariate question or hypothesis in which we ask about, or state something about, the size, form, distribution, or existence of a variable. In the account analysis at City Bank, we might be interested in developing a profile of savers. We may want first to locate them in relation to the main office. The question might be, “What percentage of the savers live within a two-mile radius of the office?” Using a hypothesis format, we might predict, “60 percent or more of the savers live within a two-mile radius of the office.” (A minimal check of such a hypothesis is sketched after the list below.) We may also be interested in securing information about other variables:

  • Relative size of accounts
  • Number of accounts for minors
  • Number of accounts opened within the last six months
  • Amount of activity (number of deposits and withdrawals per year) in accounts
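
As a concrete illustration, here is a minimal sketch, in Python, of how the radius hypothesis might be checked against account records. The records and the `miles_from_office` field are hypothetical stand-ins, not actual City Bank data.

```python
# Minimal sketch: checking the univariate hypothesis that 60 percent or more
# of savers live within a two-mile radius of the office. The account records
# here are hypothetical illustrations only.

accounts = [
    {"account_id": 1, "miles_from_office": 0.8},
    {"account_id": 2, "miles_from_office": 1.5},
    {"account_id": 3, "miles_from_office": 4.2},
    {"account_id": 4, "miles_from_office": 1.9},
    {"account_id": 5, "miles_from_office": 6.7},
]

within_two_miles = sum(1 for a in accounts if a["miles_from_office"] <= 2.0)
share = within_two_miles / len(accounts)

print(f"{share:.0%} of savers live within two miles of the office")
print("Hypothesis (>= 60%) supported" if share >= 0.60 else "Hypothesis not supported")
```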

Data on each of these variables, by themselves, may have value for management decisions. Bivariate relationships between these or other variables may be of even greater interest. Cross-tabulations between the distance from the branch and account activity may suggest that differential rates of activity are related to account owner location. A cross-tabulation of account size and gender of account owner may also show interrelation. Such correlative relationships do not necessarily imply a causal relationship.
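
To make the idea concrete, the following sketch builds such a cross-tabulation with pandas. The location and activity categories, and the records themselves, are illustrative assumptions.

```python
# Sketch of a cross-tabulation between saver location and account activity.
# Categories and counts are illustrative, not real account data.
import pandas as pd

savers = pd.DataFrame({
    "location": ["near", "near", "near", "distant", "distant", "near", "distant", "near"],
    "activity": ["high", "high", "low", "low", "low", "high", "low", "low"],
})

# Row percentages show whether activity rates differ by location.
table = pd.crosstab(savers["location"], savers["activity"], normalize="index")
print(table)
```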

Descriptive studies are often much more complex than this example. One study of savers began as described and then went into much greater depth. Part of the study included an observation of account records that revealed a concentration of nearby savers. Their accounts were typically larger and more active than those whose owners lived at a distance. A sample survey of savers provided information on stages in the family life cycle, attitudes towards savings, family income levels, and other matters. Correlation of this information with known savings data showed that women owned larger accounts. Further investigation suggested that women with larger accounts were often widowed or working single women who were older than the average account holder.

Information about their attitudes and savings practices led to new business strategies at the bank. Some evidence collected suggested causal relationships. The correlation between nearness to the office and the probability of having an account at the office suggested the question, “Why would people who live far from the office have an account there?” In this type of question a hypothesis makes its greatest contribution by pointing out directions that the research might follow. It might be hypothesized that:

  a. Distant savers (operationally defined as those with addresses more than two miles from the office) have accounts at the office because they once lived near the office; they were ‘near’ when the account decision was made.
  b. Distant savers actually live near the office, but the address on the account is outside the two-mile radius; they are ‘near’, but the records do not show this.
  c. Distant savers work near the office; they are ‘near’ by virtue of their work location.
  d. Distant savers are not normally near the office but responded to a promotion that encouraged savers to bank via computer; this is another form of ‘nearness’ in which the concept is transformed into one of ‘convenience’.

When these hypotheses were tested, it was learned that a substantial portion of the distant savers could be accounted for by hypotheses (a) and (c). The conclusion: location was closely related to saving at a given association. The determination of cause is not so simple, however, and these findings still fall within the definition of a descriptive study.

Causal Studies
The correlation between location and probability of account holding at the savings and loan association looks like strong evidence to many, but the researcher with scientific training will argue that correlation is not causation. Who is right? The essence of the disagreement seems to lie in the concept of cause.

The Concept of Cause
One writer asserts, “There appears to be an inherent gap between the language of theory and research which can never be bridged in a completely satisfactory way. One thinks in terms of theoretical language that contains notions such as causes, forces, systems, and properties. But one’s tests are made in terms of covariations, operations, and pointer readings.” The essential element of causation is that A ‘produces’ B or A ‘forces’ B to occur. But that is an artifact of language, not what happens. Empirically, we can never demonstrate A-B causality with certainty, because we do not ‘demonstrate’ such causal linkages deductively or use the form of validation of premises that deduction requires for conclusiveness. Unlike deductive syllogisms, empirical conclusions are inferences – inductive conclusions. As such, they are probabilistic statements based on what we observe and measure. But we cannot observe and measure all the processes that may account for the A-B relationship.

Previously, we discussed the example of a light failing to go on as the switch was pushed. Having ruled out other causes for the light’s failure, we were left with one inference that was probably, but not certainly, the cause. To meet the ideal standard of causation would require that one variable always caused another and that no other variable had the same causal effect. The method of agreement, proposed by John Stuart Mill in the nineteenth century, states, “When two or more cases of a given phenomenon have one and only one condition in common, then that condition may be regarded as the cause (or effect) of the phenomenon.” Thus, if we can find Z and only Z in every case where we find C, and no others (A, B, D, or E) are found with Z, then we can conclude that C and Z are causally related.
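
Mill’s rule can be expressed as a small procedure. The sketch below assumes each case of the phenomenon Z is recorded as the set of conditions observed with it; finding that C is the only condition shared by every case makes C the candidate cause. The case data are hypothetical.

```python
# Sketch of Mill's method of agreement: across all cases exhibiting the
# phenomenon Z, find the condition(s) they have in common. If exactly one
# condition is shared, it is the candidate cause. Case data are hypothetical.

def common_conditions(cases):
    """Return the conditions present in every case of the phenomenon."""
    shared = set(cases[0])
    for conditions in cases[1:]:
        shared &= set(conditions)
    return shared

cases_with_Z = [
    {"A", "C"},        # Z observed alongside A and C
    {"B", "C", "D"},   # Z observed alongside B, C, and D
    {"C", "E"},        # Z observed alongside C and E
]

print(common_conditions(cases_with_Z))  # {'C'} -> C is the candidate cause
```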

Causal Relationships
Our concern in causal analysis is with how one variable affects, or is ‘responsible for’, changes in another variable. The stricter interpretation of causation, found in experimentation, is that some external factor ‘produces’ a change in the dependent variable. In business research, we often find that the cause-effect relationship is less explicit. We are more interested in understanding, explaining, predicting, and controlling relationships between variables than we are in discerning causes.

If we consider the possible relationships that can occur between two variables, we can conclude there are three possibilities: the relationship may be symmetrical, reciprocal, or asymmetrical. A symmetrical relationship is one in which two variables fluctuate together, but we assume that changes in neither variable are due to changes in the other. Symmetrical conditions are most often found when two variables are alternate indicators of another cause or independent variable. We might conclude that a correlation between low work attendance and active participation in a company camping club is the result of (dependent on) another factor, such as a lifestyle preference.

A reciprocal relationship exists when two variables mutually influence or reinforce each other. This could occur if the reading of an advertisement leads to the use of a brand of product. The usage, in turn, sensitizes the person to notice and read more of the advertising of that particular brand. Most research analysts look for an asymmetrical relationship. With these we postulate that changes in one variable (the independent variable, or IV) are responsible for changes in another variable (the dependent variable, or DV). The identification of the IV and DV is often obvious, but sometimes the choice is not clear. In these latter cases we evaluate them on the basis of (1) the degree to which each may be altered and (2) the time order between them. Since age, social class, climate, world events, and present manufacturing technology are relatively unalterable, we normally choose them as independent variables. In addition, when we can detect a time order, we usually find that the IV precedes the DV.

The four types of asymmetrical relationships are:
1. Stimulus-response relationship. This represents an event or force that results in a response from some object. A price rise results in fewer unit sales; a change in work rules leads to a higher level of worker output; a change in government economic policy restricts corporate financial decisions. Experiments usually involve stimulus-response relationships.
2. Property-disposition relationship. A property is an enduring characteristic of a subject that does not depend on circumstances for its activation. Age, gender, family status, religious affiliation, ethnic group, and physical condition are personal properties. A disposition is a tendency to respond in a certain way under certain circumstances. Dispositions include attitudes, opinions, habits, values, and drives. Examples of property-disposition relationships are the effect of age on attitudes about saving, of gender on attitudes toward social issues, or of social class on opinions about taxation. Properties and dispositions are major concepts used in business and social science research.
3. Disposition-behaviour relationship. Behavioural responses include consumption practices, work performance, interpersonal acts, and other kinds of performance. Examples of relationships between dispositions and behaviour include opinions about a brand and its purchase, job satisfaction and work output, and moral values and tax cheating.
4. Property-behaviour relationship. Examples include such relationships as the stage of the family life cycle and purchases of furniture, social class and family savings patterns, and age and sports participation. When thinking about possible causal relationships or proposing causal hypotheses, one must state the direction of the relationship: which variable is the cause and which is the effect.

Testing Causal Hypotheses
While no one can be certain that variable A causes variable B to occur, one can gather evidence that increases the belief that A leads to B. We seek three types of evidence:

1. Is there a predicted covariation between A and B? Do we find that A and B occur together in the way hypothesized? Or, when there is more or less of A, does one also find more or less of B? When such covariation exists, it is an indication of a possible causal connection.
2. Is the time order of events moving in the hypothesized direction? Does A occur before B? If we find that B occurs before A, we can have little confidence that A causes B.
3. Is it possible to eliminate other possible causes of B? Can one determine that C, D, and E do not covary with B in a way that suggests possible causal connections?
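
Here is a minimal sketch of the first and third kinds of evidence, assuming simple numeric measurements of A, B, and a rival variable C: strong covariation between A and B supports the hypothesis, while weak covariation between C and B helps rule out C as an alternative cause. The values are illustrative.

```python
# Sketch of the covariation evidence: does A covary with B as hypothesized,
# and does a rival variable C also covary with B? Values are illustrative.
import numpy as np

A = np.array([1, 2, 3, 4, 5, 6])   # hypothesized cause
B = np.array([2, 4, 5, 8, 9, 12])  # hypothesized effect
C = np.array([5, 3, 6, 2, 4, 1])   # rival explanation to rule out

r_ab = np.corrcoef(A, B)[0, 1]
r_cb = np.corrcoef(C, B)[0, 1]

print(f"corr(A, B) = {r_ab:.2f}")  # strong covariation supports the hypothesis
print(f"corr(C, B) = {r_cb:.2f}")  # a weak value helps rule out C as a cause
```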

Causation and Experimental Design
In addition to these three conditions, successful inference making from experimental designs must meet two other requirements. The first is referred to as control. All factors, with the exception of the independent variable, must be held constant and not confounded with another variable that is not part of the study. Second, each person in the study must have an equal chance of exposure to each level of the independent variable. This is random assignment of subjects to groups.

Here is a demonstration of how these factors are used to detect causation. Assume you wish to conduct a survey of a university’s alumni to enlist their support for a new program. There are two different appeals, one largely emotional and the other much more logical in its approach. Before mailing out appeal letters to 50,000 alumni, you decide to conduct an experiment to see whether the emotional or the logical appeal will draw the greater response. You choose a sample of 300 names from the alumni list and divide them into three groups of 100 each. Two of these groups are designated as the experimental groups. One gets the emotional appeal and the other gets the logical appeal. The third group is the control group and it receives no appeal.
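
A minimal sketch of this random assignment, with placeholder names standing in for the sampled alumni:

```python
# Sketch of random assignment for the alumni experiment: 300 sampled names
# shuffled and split into two experimental groups and one control group of
# 100 each. The names are placeholders, not real alumni records.
import random

sample = [f"alumnus_{i}" for i in range(300)]  # stand-ins for sampled names
random.shuffle(sample)                         # random order removes selection patterns

emotional_group = sample[:100]    # receives the emotional appeal
logical_group = sample[100:200]   # receives the logical appeal
control_group = sample[200:]      # receives no appeal

print(len(emotional_group), len(logical_group), len(control_group))
```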

Covariation in this case is expressed by the percentage of alumni who respond in relation to the appeal used. Suppose 50 percent of those who receive the emotional appeal respond, while only 35 percent of those receiving the logical appeal respond. Control group members, unaware of the experiment, respond at a 5 percent rate. We would conclude that using the emotional appeal enhances response probability.
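
Whether a 50 percent versus 35 percent difference is larger than chance variation can be checked with a test of statistical significance. The sketch below applies a chi-square test to the observed counts from the example above; scipy is assumed to be available.

```python
# Sketch of testing whether the response-rate difference between the two
# appeals exceeds chance variation, using a chi-square test on the counts.
from scipy.stats import chi2_contingency

#                 responded, did not respond
observed = [[50, 50],   # emotional appeal (50% of 100)
            [35, 65]]   # logical appeal (35% of 100)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A small p-value suggests the appeals really do draw different response rates.
```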

The sequence of events was not a problem. There could be no chance that the alumni support led to the sending of the letter requesting support. However, have other variables confounded the results? Could some factor other than the appeal have produced the same results? One can anticipate that certain factors are particularly likely to confound the results. One can control some of these to ensure they do not have this confounding effect. If the question studied is of concern only to alumni who attended the university as undergraduates, those who attended only graduate school are not involved. Thus, you would want to be sure the answers from the latter group did not distort the results. Control would be achieved by excluding graduate-school-only alumni from the sample.

A second approach to control uses matching. With alumni, there might be reason to believe that different ratios of support will come from various age groups. To control by matching, we need to be sure the age distribution of alumni is the same in all groups. In a similar way, control could be achieved by matching alumni from engineering, liberal arts, business and other schools. Even after using such controls, however, one cannot match or exclude other possible confounding variables. These are dealt with through random assignment.
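
Here is a sketch of control by matching on age, under the assumption of three illustrative age strata: each stratum is shuffled and dealt round-robin across the groups, so all three groups end up with essentially the same age distribution.

```python
# Sketch of control by matching: stratify alumni by age group, then assign
# each stratum evenly (and randomly within it) across the three groups so
# every group shares the same age distribution. Data are hypothetical.
import random
from collections import defaultdict

alumni = [("alumnus_%d" % i, random.choice(["under_35", "35_to_50", "over_50"]))
          for i in range(300)]

strata = defaultdict(list)
for name, age_group in alumni:
    strata[age_group].append(name)

groups = {"emotional": [], "logical": [], "control": []}
for members in strata.values():
    random.shuffle(members)                 # randomize within each stratum
    for i, name in enumerate(members):
        group = list(groups)[i % 3]         # deal members round-robin
        groups[group].append(name)

for g, members in groups.items():
    print(g, len(members))
```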

Randomization is the basic method by which equivalence between experimental and control groups is achieved. Experimental and control groups must be established so that they are equal. Matching and controlling are useful, but they do not account for all unknowns. It is best to assign subjects either to experimental or to control groups at random (this is not to say haphazardly – randomness must be secured in a carefully controlled fashion according to strict rules of assignment). If the assignments are made randomly, each group should receive its fair share of different factors. The only deviation from this fair share would be that which results from random variation (luck of the draw). The possible impact of these unknown extraneous variables on the dependent variable should also vary at random. The researcher, using tests of statistical significance, can estimate the probable effect of these chance variations on the DV and can then compare this estimated effect of extraneous variation to the actual differences found in the DV in the experimental and control groups.

We emphasize that random assignment of subjects to experimental and control groups is the basic technique by which the two groups can be made equivalent. Matching and other control forms are supplemental ways of improving the quality of measurement. In a sense, matching and controls reduce the extraneous ‘noise’ in the measurement system and in this way improve the sensitivity of measurement of the hypothesized relationship.
