This chapter is written after data collection has been completed. It is written in the past tense and contains the results of the data analysis.
This section discusses the content of the chapter.
This section analyzes the research findings with particular emphasis on the objectives of the study. The results are presented using quantitative and qualitative analysis where applicable. Tables and graphs are used where appropriate to make the results clear.
The results presented should be discussed and linked to the literature reviewed.
Subheadings should be used.
This section should briefly summarize the major highlights of the study with emphasis on the objectives. The researcher should ensure that all items in the data collection instruments are addressed.
A scale measures the magnitude or quantity of a variable. A variable is a symbol, e.g. X or Y, that represents any of a specified set of values. There are four types of scales commonly used as levels of measurement.
- Nominal scales allow for qualitative classification. They deal with categorical responses whose values are names or labels, e.g. gender (categorized as male or female), ethnicity, marital status and religion. Appropriate statistics for nominal data include the mode, frequencies and the chi-square test.
- Ordinal scales are similar to nominal scales, but their values can be ordered in a meaningful sequence. Ordinal data have order, but the intervals between scale points are uneven; because the distances are not equal, arithmetic operations are not meaningful, although logical comparisons can be performed.
- Interval scales deal with interval variables, which convey more information than ordinal scales. They have an equal distance between each value, e.g. the distance between 1 and 2 is equal to the distance between 99 and 100. Appropriate statistics include those for nominal and ordinal scales plus the mean, standard deviation, correlation, regression and ANOVA.
- Ratio scales measure variables that have the same properties as interval variables, except that ratio scales have an absolute zero point, e.g. height, weight, length and unsold units. All statistics permitted for the interval scale apply, plus the geometric mean, harmonic mean and logarithms.
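The four levels of measurement above can be illustrated with a short sketch using Python's standard library. This is not part of the original text; the sample values are hypothetical, and the point is simply which summary statistics are meaningful at each level.

```python
# Illustrative sketch: which summary statistics suit each level of
# measurement. All sample values below are hypothetical.
from statistics import mode, mean, stdev, geometric_mean

# Nominal: labels only -> mode and frequency counts are meaningful.
gender = ["male", "female", "female", "male", "female"]
print(mode(gender))                      # most frequent category

# Ordinal: ordered categories -> ranking is meaningful, arithmetic is not.
satisfaction = ["low", "medium", "high", "medium"]
order = {"low": 0, "medium": 1, "high": 2}
print(sorted(satisfaction, key=order.get))

# Interval: equal distances between values -> mean and standard deviation apply.
temperatures_c = [21.0, 23.5, 19.8, 22.1]
print(mean(temperatures_c), stdev(temperatures_c))

# Ratio: interval properties plus an absolute zero -> ratio-based statistics
# such as the geometric mean are also meaningful.
weights_kg = [60.0, 72.5, 55.0]
print(geometric_mean(weights_kg))
```

Note that applying `mean` to the ordinal labels would fail, which mirrors the rule stated above: arithmetic operations require at least interval-level data.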
There are three major criteria for evaluating a measurement tool.
- Validity: includes internal validity, which refers to whether the outcome of the study is a function of the program being tested. A study has internal validity if its outcome can be attributed to the approach under test, which is justified when the researcher has been able to control the threats posed by other variables, i.e. IV, MV or EV.
Internal validity is further classified as:
- Content validity: whether the measuring instrument adequately covers the topic under study.
- Criterion-related validity: reflects the success of measures used for prediction or estimation. A researcher may want to predict an outcome or estimate the existence of a current behavior or condition; these correspond to predictive and concurrent validity, respectively.
- Construct validity: concerns the measurement of abstract characteristics for which no empirical validation seems possible.
- Reliability: a measure is reliable to the degree that it supplies consistent results. Reliability is concerned with the accuracy and precision of a measurement procedure.
- Practicability: is concerned with a wide range of factors including economy, convenience and interpretability.
According to Cooper & Schindler (2011), data preparation is conducted using the following methods: editing, coding and data entry. These activities ensure the accuracy of the data and its conversion from raw form into reduced, classified forms that are more appropriate for analysis. The methods are discussed below.
Editing is the first step of data analysis. It involves checking raw data to detect errors, omissions and points of confusion, and correcting them where possible so that maximum data quality standards are achieved. The purpose of editing is to guarantee that the data are accurate, consistent with other information, uniformly entered, complete and arranged so as to simplify coding and tabulation; analysis can then take place with minimum confusion. There are two stages in editing:
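The editing step described above can be sketched as a simple program that scans raw records for omissions and implausible values before coding begins. This is an illustrative sketch only; the field names and valid ranges are hypothetical, not taken from the text.

```python
# Hedged sketch of editing: detecting errors and omissions in raw records.
# Field names ("age", "gender") and valid ranges are hypothetical examples.
raw_records = [
    {"id": 1, "age": 34, "gender": "male"},
    {"id": 2, "age": None, "gender": "female"},   # omission
    {"id": 3, "age": 240, "gender": "female"},    # implausible value
]

def edit_record(record):
    """Return a list of problems found; an empty list means the record is clean."""
    problems = []
    if record["age"] is None:
        problems.append("missing age")
    elif not 0 <= record["age"] <= 120:
        problems.append("age out of range")
    if record["gender"] not in {"male", "female"}:
        problems.append("unknown gender code")
    return problems

for rec in raw_records:
    issues = edit_record(rec)
    if issues:
        print(rec["id"], issues)   # flag the record for correction or follow-up
```

Records flagged here would be corrected where possible, consistent with the goal of accuracy and completeness stated above.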
The first stage establishes whether data collection was actually conducted in the field; for interviews, for example, checking responses to open-ended questions may be used to unearth falsified responses.
According to Mugenda and Mugenda (2012), coding is a system of classifying a variable into categories and assigning different numbers to the various classifications so that quantitative analysis can be conducted. For example, a variable such as occupation would have classifications such as teacher, nurse, driver and clerk, each with a numerical code, e.g. teacher = 1, nurse = 2, driver = 3, clerk = 4. Coding thus involves assigning numbers or other symbols to answers so that responses can be grouped into a limited number of classes or categories; for gender, a researcher may use M or F, code 1 for male and 2 for female, or use 0 and 1. Classifying data into limited categories sacrifices some detail but is necessary for efficient analysis, and it helps the researcher reduce several thousand replies into a few categories containing the critical information. The researcher determines the appropriate categories into which responses are placed and assigns a different numerical code to each response category.

Researchers frequently use summary statistics for presenting findings. These include measures of central tendency (mean, median and mode), measures of dispersion (variance, standard deviation, range and interquartile range), measures of skewness and kurtosis, and percentages. They enable generalization about the sample of study objects. Frequency tables, bar charts and pie charts are used in displaying data.
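The occupation coding scheme described above can be sketched directly in code: a codebook maps each label to its number, and a frequency count (an appropriate statistic for nominal data) summarizes the coded responses. The responses listed are hypothetical.

```python
# Sketch of the coding scheme described in the text: occupation labels
# mapped to numeric codes, followed by a frequency table.
from collections import Counter

codebook = {"teacher": 1, "nurse": 2, "driver": 3, "clerk": 4}

# Hypothetical questionnaire responses.
responses = ["teacher", "nurse", "teacher", "driver", "clerk", "teacher"]

# Apply the codebook to convert labels into numeric codes.
coded = [codebook[r] for r in responses]
print(coded)              # [1, 2, 1, 3, 4, 1]

# Frequency table: how many respondents fall into each category.
print(Counter(responses))
```

The same pattern applies to gender (e.g. male = 1, female = 2) or any other categorical variable mentioned above.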
Tabulation involves counting the number of responses that fall into each category. It may take the form of simple tabulation, which addresses one variable (e.g. number of cigarettes smoked per day), or cross tabulation, which combines variables (e.g. number of cigarettes smoked per day against the age of the respondent). These are suitable for simple studies; studies involving large numbers of respondents with many items to be analyzed rely on computer tabulation and computer packages for analysis. Data entry involves converting information gathered by secondary and primary methods into a medium for further manipulation. There is a wide variety of ways to enter data into the computer for analysis; probably the easiest is to type the data in directly. To ensure a high level of data accuracy, the data analyst should use a procedure called double entry, in which the data are keyed in twice and the two entries are compared so that discrepancies can be flagged and corrected.
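The tabulation and double-entry procedures above can be illustrated with a short sketch. The data values are hypothetical; the cigarettes-per-day and age-group variables echo the example in the text.

```python
# Sketch of simple tabulation, cross tabulation and a double-entry check.
# All data values are hypothetical.
from collections import Counter

# Each record: (cigarettes smoked per day, age group of the respondent).
records = [(5, "18-25"), (10, "18-25"), (5, "26-35"), (20, "26-35"), (5, "18-25")]

# Simple tabulation: one variable.
simple = Counter(cigs for cigs, _ in records)
print(simple)             # counts per cigarettes-per-day value

# Cross tabulation: two variables combined.
cross = Counter(records)
print(cross)              # counts per (cigarettes, age-group) pair

# Double entry: the same data keyed in twice; mismatching positions
# are flagged for correction against the original questionnaires.
entry_one = [5, 10, 5, 20, 5]
entry_two = [5, 10, 5, 2, 5]    # keying error in the fourth value
mismatches = [i for i, (a, b) in enumerate(zip(entry_one, entry_two)) if a != b]
print(mismatches)         # positions where the two entries disagree
```

In practice, statistical packages provide these operations directly; the sketch only shows the logic behind them.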
First, there is the legitimate DK ("don't know") response from respondents who genuinely do not know the answer to the question being asked.
Second, DK responses may come from respondents who decline to answer questions or refuse to give the questionnaire the seriousness it deserves.
The best way to deal with undesired DK answers is to design better questions from the start. Researchers should identify the questions for which a DK response is unsatisfactory and design around them. During the interview process, a good rapport should be established between interviewer and interviewee so that probing can be done easily and respondents can provide definite answers. The interviewer may also record verbatim any elaboration by the respondent and pass the problem on to the editor.