26 July 2011
RESEARCH METHODOLOGY
SEMESTER-I
Teaching Notes
Unit-1: Aim and Scope of Research in Education
Research-meaning, importance and characteristics
1. Define Educational Research.
Research is an inquiry process that has clearly defined parameters and has as its aim the discovery or creation of knowledge, or theory building; the testing, confirmation, revision or refutation of knowledge and theory; and/or the investigation of a problem for local decision-making.
C. R. McClure and P. Hernon, 1991, p. 3
Scientific research is a systematic, controlled, empirical and critical investigation of hypothetical propositions about the presumed relations among natural phenomena.
F. N. Kerlinger, 1978, p. 11
Research is a planned, systematic and cautious endeavor undertaken to find out something new or to deepen or better our understanding of something we already know. The 'new' in our finding may be a new phenomenon (and knowledge about it), or uses or applications of a phenomenon (or knowledge), or solutions to problems that we face. A better (more correct) understanding will take us nearer to reality and lead to better, wider and more varied forms of applicability of the phenomenon (knowledge) being researched.
Research is a continuous endeavor. Its findings are based on probabilities. They are acceptable rationally within the boundaries of the school of philosophy and empirically with reference to the evidence collected. When rationality becomes more comprehensive and evidence supports a higher level of facts, research advances further towards reality. Thus research is a continuous, progressive process. It is performed to identify, understand and apply naturally occurring physical or socially constructed phenomena. It generates theory, promotes productive applications, and improves practices; it may lead to discoveries and/or applications.
______________________________________________________________________________
Dr.K.Chellamani, 26 July 2011
2. Write the importance of educational research. (Need)
Research in education is essential for the development of education as a science and for improving the system of education. Precisely, it is needed for (i) investigating the nature of children, (ii) investigating the learning process and motivation, (iii) developing curricula, (iv) developing better learning materials and procedures, (v) refining teaching procedures, (vi) refining evaluation procedures, (vii) improving management, (viii) predicting 'academic success', (ix) investigating the causes of undesirable phenomena in education, (x) evaluating the effectiveness of different educational 'treatments', and so on.
Education has strong roots in fields like philosophy, psychology, sociology, economics and history. It is through an intensive process of scientific enquiry about the impact of these disciplines on various aspects of education that sound theories of education can be established.
Sound education is recognized as basic to individual, social and national development. It helps in improving the policies, systems and practices of the government. Educational practitioners are always in search of effective methods of instruction, more reliable techniques of evaluation, richer learning materials, more efficient systems of management, etc.
Decisions based on systematic research in education save time, money and energy; reduce failure; and show us the path of progress. This is because they are scientifically tested and their worth well-established.
Research in education is assuming greater importance and urgency due to the rapid expansion and democratization of education throughout the world. Education has become the right of every individual in India. This expansion brings to light innumerable problems: problems of expansion, inclusion, buildings, finance, strategies, individual differences, media and materials, management, etc. Extensive quality research is needed to solve all these problems.
Problems also arise in phenomena like academic achievement, discipline, wastage and stagnation, truancy, mass copying in examinations, etc. These call for investigation and remedial measures.
In the world of technology, a new device appears every day. A lot of research is needed for its effective usage, for developing efficiency among people, and for productivity.
And above all, the concept of education changes every day. Accordingly, the education system has to be modified, for which research is needed.
______________________________________________________________________________
Dr.K.Chellamani, 26 July 2011
3. List the main characteristics of educational research:
On examining the definitions, the characteristics of educational research are as follows:
• Research is an inquiry or investigation;
• It has clearly defined parameters;
• It describes problems;
• It explains problems;
• It generalizes the findings;
• It predicts phenomena;
• It is a critical inquiry;
• It is a systematic inquiry;
• It is time-bound;
• It should satisfy external and internal validity;
• It is objective and transferable;
• It is a controlled inquiry; and
• It is an empirical inquiry.
4. What are the AIMS of educational research?
• Finding truth: Research is a search for truth.
• Testing hypothesis(es): Whenever a research worker is confronted with a probable relation, known as a hypothesis, he/she tests it through a scientific process.
• Establishing relationship(s): Many a time research is undertaken to study the relationship between variables, for example the relationship between the socio-economic status of parents and the achievement of their offspring.
• Arriving at generalizations or theory building: On the basis of research findings, generalizations are arrived at or theories are built, like theories of learning.
• Verification and testing of existing facts, theories, generalizations, etc. (improvement of knowledge): Research findings also help in verifying existing theories or generalizations arrived at by other researchers.
• Revision of existing theories: Existing theories undergo revision whenever contradicting evidence comes to light.
• Finding solutions to problems/questions: Research is also a problem solving activity in a systematic way.
• Providing a scientific basis for planning, decision making, policy formulation, policy revision, etc.: The database generated through research activity is helpful in the decision-making and planning process.
• Predicting behaviors, forecasting outcomes/events on the basis of research, viz., population projections, number of teachers and schools needed in the next decade, etc.
________________________________________________________________________
Dr.K.Chellamani, 29 July 2011 Teaching Notes
5. Give an account of the Areas of Educational research.
Research is a highly purposeful and systematically conducted activity. Every piece of research has its own clearly defined objectives: to answer certain questions or solve certain problems. It leads to the confirmation or discovery of new relationships, principles and other theoretical propositions, which may have their implications and applications. Thus research generates theory, promotes productive applications, and improves practices; it may lead to discoveries and/or applications.
Research in Education is essential for the development of education as a science and for improving the process/system of education. Research is much needed for (1) investigating the nature of children, (2) investigating the learning process and motivation, (3) developing curricula, (4) developing better learning materials and procedures, (5) refining teaching procedures, (6) refining evaluation procedures, (7) improving management, (8) predicting academic success, (9) investigating the causes of undesirable phenomena in education, (10) evaluating the effectiveness of different educational 'treatments', and so on.
There are certain other considerations that emphasize the need for and importance of research in education and they are as follows.
Education has strong roots in fields like philosophy, psychology, sociology, economics and history. These disciplines have an impact on education. An intensive scientific enquiry into this impact may establish sound theories of Education.
Sound education is basic for individual, social and national development. Therefore research is required to improve educational policies, systems and practices. Educational practitioners are constantly in search of effective methods of instruction, more reliable techniques of evaluation, richer learning materials, more efficient systems of management, etc. There is dignity on the part of educational practitioners in using research-proven practices.
The rapid expansion and democratization of education throughout the world necessitates research in education. Since the right of every individual to fulfill development through education has been recognized everywhere, every country is aiming at providing universal education to its citizens in the shortest possible period of time. Consequently, a number of educational problems – problems of expansion, buildings, finance, strategies, individual differences, media and materials, management, etc. – have arisen. For successful solutions, extensive research of quality is needed.
Many undesirable phenomena like poor academic achievement, indiscipline, wastage and stagnation, truancy, mass copying in examinations, etc., are observed in the field of education. Investigation of these problems with a view to devising remedial measures is badly needed.
The expansion of and innovation in technology call for research into their appropriate usage in education. Research is required to improve the quality of education, and also to economize effort and increase efficiency.
There is also a need for research because of the changing conception of education reflected in commission reports. “Indeed, in the 21st century everyone will be required to demonstrate independence, judgment and more personal responsibility if common objectives are to be reached. Our report also underlines another requirement, namely that none of the talents lying dormant like hidden treasures in every individual should be allowed to go unused. These talents, to name but a few, include: memory, logical thought, imagination, physical ability, an aesthetic sense, the ability to communicate and the natural charisma of a group leader. In actual fact these abilities only serve in underlining the importance of more self-knowledge (...). The "information revolution" is increasing ease of access to information and facts. This means that education should be striving to make it possible for everyone to collect and single out relevant information, to put it in context, to handle information and to use it. To this end, then, education should seek to adjust itself constantly to the changes going on in society; it must be able to pass on the acquisitions, the foundations and the richness of human experience."
Dr.K.Chellamani, 30 August 2011 Teaching Notes
M.Ed. Teaching Notes
What is Qualitative and Quantitative Research?
Through the ages, man has tried to come to grips with his environment and to understand the nature of the phenomena it presents to his senses. The means by which he sets out to achieve these ends may be classified into three broad categories: experience, reasoning and research (Mouly, 1978). Unlike the other two, scientists construct their theories carefully and systematically out of their research. The hypotheses they formulate are tested empirically so that their explanations have a firm basis: they are explanations based on the data collected. Beyond the veracity of the data, there is the question of its significance. Here comes qualitative research. It answers the 'what' and 'why' of the problem. Creswell (1998) names five qualitative research “traditions”. Those traditions are, in alphabetical order: biography, case study, ethnography, grounded theory, and phenomenology. To this list may be added other approaches such as evaluation, action research and critical theoretical analysis.
Shank (2002) distinguishes qualitative from quantitative research with a pair of metaphors. Quantitative research is likened to the metaphor of the window. Windows are used to give us a clear and transparent look at things. When researchers are trying to be clear and transparent, such issues as reliability, validity, bias, and “generalizability” rise to the surface. Windows are most useful in giving us clear, typical, broad, accurate and realistic views of things in the empirical world. Qualitative research, on the other hand, is likened to the metaphor of the lantern. Lanterns are used to allow light to illuminate dark areas so that we can see things that previously were obscure. Once we shed light on things, we understand them better. Considering the above explanations, we take up the following as the essential criteria for a researcher to abide by: (1) depth in investigation, (2) adequate interpretation, (3) fertile illumination, and (4) accountability in participation.
Investigation is the duty of researchers. They must be aware that any phenomenon in the empirical world has much that remains below current awareness. The investigator has to go below the surface to discern matters and issues that reveal themselves only under careful scrutiny; that is the depth he has to go to. That is, researchers are not to be contented with familiar perspectives and preconceptions. Once they are convinced that they have gone into previously unaddressed areas with enough depth to bring them to the notice of readers, then comes the issue of adequate interpretation. The investigator has the responsibility of forming a more complete and complex understanding of those things, with an intelligible and manageable grasp of the phenomena under examination. Once the researchers have investigated with depth and created interpretative understanding for newer insights, the next question is 'how to use it?'. Hence the researcher must illuminate new ways of looking and new ways of practicing. Researchers participate in a variety of procedures, and they must be accountable for any and all forms of participation. Researchers must document these efforts in order to see how they have interacted and what some of the anticipated and unanticipated fruits of such participation might be.
The logic of empirical research must be consistent for qualitative and quantitative research. Quantitative research differs from qualitative research in its clear framework, whereas qualitative research often takes a mixed approach. Hypothesis testing in quantitative research may be more powerful when compared to the description and narration of qualitative research. That does not mean that mixed-method studies are unimportant or should not be done; indeed, they often complement hypothesis testing as part of quantitative research. The flexibility of qualitative research is replaced by an array of systematic techniques in quantitative research, which has a rigorous design of its own, with identification and control of explicit variables. Qualitative research, by contrast, does not allow too much preplanning, and this may diminish the power and potential usefulness of the study. There is a belief that all elements in a qualitative study should be coded; but if the researcher is not careful, too much data becomes a threat and minute details are lost. Coding is for laying out key issues. Qualitative researchers have the informed freedom to sort through their data, looking for the important details that help explain the phenomena in a newer way. They are more concerned with tactical and flexible exploration than with creating reductive, scientific coding and thematic structures, and they are able to address these issues as they arise in practice.
The above discussion leads us to the conditions listed below. (1) Careful examination of research assumptions: the researcher must be aware of implicit assumptions. Qualitative researchers are to make the strange familiar and even be prepared to make the familiar strange. They must move into those dark and unexamined areas; this is not possible by adhering to old and established ways of doing things. (2) Researchers must be ready to change their direction and must convey later why and how the change was made. (3) Education is a lively process; hence the research process – the research design and the presentation of the report – should be lively. Then that research will live forever.
Validity and Reliability
Validity: It is an important factor in research. If a piece of research is invalid then it is worthless; validity is thus the worthiness of a piece of research. It is a requirement for both qualitative and quantitative research. An instrument is valid if it measures what it purports to measure. In qualitative research, validity is addressed through the honesty, depth, richness and scope of the data achieved, the participants approached, the extent of triangulation and the objectivity of the researcher. In quantitative research, data validity may be improved through careful sampling, appropriate instrumentation and appropriate statistical treatment of the data. One hundred per cent validity is impossible. Quantitative research possesses an inbuilt measure of standard error which has to be acknowledged. In qualitative research there is a degree of bias in the subjectivity of the respondents, their opinions, attitudes and perspectives. Therefore validity should be seen as a matter of degree rather than as an absolute state (Gronlund, 1981). Hence the researcher should strive to minimize invalidity and maximize validity.
There are several kinds of validity; here we consider four important ones: content validity, construct validity, concurrent validity and face validity. In fact validity is the touchstone of all types of educational research, shown in generalizability, replicability and controllability. Maxwell, echoing Mishler (1990), suggests that 'understanding' is a more suitable term than 'validity' in qualitative research. Validity attaches to accounts, not to data or methods (Hammersley and Atkinson, 1983); it is the meaning that subjects give to data, and the inferences drawn from the data, that are important. 'Fidelity' (Blumenfeld-Jones, 1995) requires the researcher to be as honest as possible to the self-reporting of the researched. Data selected must be representative of the sample, the whole data set and the field, i.e., they must address content, construct and concurrent validity.
Researchers use instruments to obtain information for solving the research problem in hand. If a teacher wants to know about the impact of a new curriculum on students' achievement, he needs both an instrument to record the data and some assurance that the information obtained will enable him to draw correct conclusions. The drawing of correct conclusions based on the data obtained by use of an instrument is what validity is about. Validity can be defined as referring to the appropriateness, meaningfulness and usefulness of the specific inferences researchers make based on the data they collect. The researcher has to validate an instrument before he takes it for testing. Validity is the degree to which evidence supports any inferences a researcher makes based on the data he collects using a particular instrument. It is not the instrument itself that is validated, but rather the inferences about the specific uses of the instrument. These inferences should be appropriate, meaningful, and useful.
Appropriateness refers to relevance, i.e. to the purposes of the study. If the purpose of the study is to learn about students' understanding of spelling rules, it would make no sense to make inferences about this from their scores on a test of English grammar.
Meaningfulness concerns the meaning of the information (such as test scores) collected through the use of an instrument. The researcher has to ask what exactly a high score on a particular test means, what that score tells us about the person who receives it, and in what way this person differs from a person who receives a low score. Such information is collected from people, and warranted conclusions are drawn about the people on whom the data were collected.
Usefulness of the instrument helps researchers make decisions related to what they were trying to find out. Researchers interested in knowing dropout rates, for example, need information that will enable them to infer the causes behind them.
Validity therefore depends on the amount and type of evidence the researcher has on which to base his interpretations. The following are the four types of validity a researcher has to establish before administering a tool.
Content validity:
This is the first key a researcher has to attend to when he develops an instrument for his study. The instrument must cover the domain or items that it purports to measure. If the domain is broad, a fair representation of the wider issue can be taken. Care should be given to sampling items with representativeness; that is, the elements of the main issue to be covered in that particular research should be both a fair representation of the wider issue and addressed in depth and breadth. For example, suppose a researcher wanted to find out a section of students' ability to spell a set of around 1,000 English words. Due to certain factors, he decided to take only 50 words for testing. That test should then ensure representativeness of the 1,000 words, perhaps in terms of spelling rules. Content validity refers to the content and format of the instrument. How appropriate is the content? How comprehensive is it? How adequately do the items represent the content to be assessed? The content and the format must relate to the definition of the variable and the sample of subjects to be measured.
Construct validity:
A construct is an abstraction. Construct validity concerns the operationalization of the tool: every item should measure what the researcher proposed to assess. That is the articulation of the construct. In the above example, the test should focus only on the spelling rules and nothing else. If the tool tests intelligence, it should address the particular intelligence the researcher expects the student to demonstrate. To establish construct validity, the researcher must see
– whether his construction of a particular issue agrees with other constructions of the same issue (this could be done through correlations with other measures of the issue);
– the theory of what that construct is and what its constituent elements are;
– also counter-examples which might falsify his construction.
This exercise of confirming and refuting evidence allows the researcher to take a clear stand and acknowledge conflicts, if any.
In qualitative research, construct validity must demonstrate that the categories the researchers are using are meaningful to the participants themselves (Eisenhart and Howe, 1992: 648), i.e., that they reflect the way in which the participants actually experience and construe the situations in the research; that they see the situations through the actors' eyes. Campbell and Fiske (1959) and Brock-Utne (1996) suggest that convergent validity implies that different methods for researching the same construct should give a relatively high intercorrelation, whilst discriminant validity suggests that using similar methods for measuring different constructs should yield relatively low intercorrelations.
Construct validity asks: how well does this construct explain differences in the behavior of individuals or their performance on certain tasks? Hence the articulation of the construct is very important. If the researcher wants to assess the aptitude of students, he should look into the acceptable, expected ability on that particular component at that particular age. He should know the theory of what that construct is and what its constituent elements are. Then seeking evidence to confirm and refute helps him establish construct validity.
Criterion-related validity:
It relates the results of one particular instrument to another, external criterion. It has two principal forms: predictive validity and concurrent validity. Predictive validity is shown when scores on an instrument predict the future performance of the same sample; for example, entrance-test results are reflected in later performance in the professional course. Concurrent validity goes with the meaning of concurrence: the data gathered from one instrument must correlate highly with data gathered using another instrument at the same time.
Both are similar in their core concept, i.e. agreement with a second measure. They differ in one point: the absence of a time element in the latter. Concurrence can be demonstrated simultaneously with another instrument. The questions are: how strong is the relationship, and how well do such scores estimate or predict future performance of a certain type?
Ensuring validity:
It is very important to maintain validity at every stage of research. The researcher has to take care to build validity into the elements of the research plan: data acquisition, data processing, analysis, interpretation and the ensuing judgment. Louis Cohen, Lawrence Manion and Keith Morrison (2000) have given a detailed account of minimizing threats to validity at every stage of investigation. It is given below.
At the design stage:
✗ choosing an appropriate time scale;
✗ ensuring that there are adequate resources for the required research to be undertaken;
✗ selecting an appropriate methodology for answering the research questions;
✗ selecting appropriate instrumentation for gathering the type of data required;
✗ using an appropriate sample (e.g. one which is representative, not too small or too large);
✗ demonstrating internal, external, content, concurrent and construct validity; 'operationalizing' the constructs fairly;
✗ ensuring reliability in terms of stability (consistency, equivalence, split-half analysis of test material);
✗ selecting appropriate foci to answer the research questions;
✗ devising and using appropriate instruments (for example, to catch accurate, representative, relevant and comprehensive data (King, Morris and Fitz-Gibbon, 1987)); ensuring that readability levels are appropriate; avoiding any ambiguity of instructions, terms and questions; using instruments that will catch the complexity of issues; avoiding leading questions; ensuring that the level of test is appropriate – neither too easy nor too difficult; avoiding test items with little discriminability; avoiding making the instruments too short or too long; avoiding too many or too few items for each issue;
✗ avoiding a biased choice of researcher or research items (insiders or outsiders as researchers).
At the stage of data gathering:
✗ reducing the Hawthorne effect;
✗ minimizing reactivity effects (respondents behaving differently when subjected to scrutiny or being placed in new situations, for example in the interview situation – we distort people's lives in the ways we go about studying them (Lave and Kvale, 1995: 226));
✗ trying to avoid dropout rates among respondents;
✗ taking steps to avoid non-return of questionnaires;
✗ avoiding having too long or too short an interval between pre-tests and post-tests;
✗ ensuring inter-rater reliability;
✗ matching experimental and control groups fairly;
✗ ensuring standardized procedures for gathering data or for administering tests;
✗ building on the motivations of the respondents;
✗ tailoring the instruments to the concentration span of the respondents and addressing other situational factors (e.g. health, environment, noise, distraction and threat);
✗ addressing factors concerning the researcher (particularly in an interview situation), for example the attitude, gender, race, age, personality, dress, comments, replies, questioning technique, behavior, style and non-verbal communication of the researcher.
At the stage of Data analysis:
✗ Using respondent validation;
✗ avoiding subjective interpretation of data (e.g. being too generous or too ungenerous in the award of marks), i.e. lack of standardization and moderation of results;
✗ reducing the halo effect, where the researcher's knowledge of the person or knowledge of other data about the person or situation exerts an influence on subsequent judgments;
✗ using appropriate statistical treatments for the level of data (avoiding applying techniques from interval scaling to ordinal data or using incorrect statistics for the size, type and complexity, sensitivity of data);
✗ recognizing spurious correlations and extraneous factors which may be affecting the data (i.e. tunnel vision);
✗ avoiding poor coding of qualitative data;
✗ avoiding making inferences and generalizations beyond the capability of the data to support such statements;
✗ avoiding the equating of correlations and causes;
✗ avoiding selective use of data;
✗ avoiding unfair aggregation of data(particularly of frequency tables);
✗ avoiding unfair telescoping of data(degrading the data);
✗ avoiding Type I and/or Type II errors.
At the stage of data reporting:
✗ avoiding using data very selectively and unrepresentatively (for example, accentuating the positive and neglecting or ignoring the negative);
✗ indicating the context and parameters of the research in the data collection and treatment, the degree of confidence which can be placed in the results, and the degree of context-freedom or context-boundedness of the data (i.e. the level to which the results can be generalized);
✗ presenting the data without mis-representing their message;
✗ making claims which are sustainable by the data;
✗ avoiding inaccurate or wrong reporting of data (i.e. technical errors or orthographic errors);
✗ ensuring that the research questions are answered; releasing the results neither too soon nor too late.
With an understanding of these areas of invalidity, the researcher can take steps to minimize errors as far as possible.
Reliability
It refers to the consistency of the scores obtained: how consistent they are for each individual from one administration of an instrument to another, and from one set of items to another. Take, for example, a test to measure the word power of students. If the test is reliable, we would expect a student who receives a high score the first time he takes the test to receive a high score the next time he takes it. The scores obtained from an instrument can be quite reliable, but not valid. Suppose a researcher gave two forms of a test on Psychology to a group of B.Ed. students and found their scores to be consistent: those who scored high on Form A also scored high on Form B, those who scored low on Form A scored low on Form B, and so on. Here the scores are reliable. But if the researcher used the same test scores to predict the success of the students in Language classes, he would be surprised to find no relation. Any inferences about success in Language based on scores in Psychology would have no validity. It is also understood that scores which are inconsistent for a person are not valid, and hence do not provide any useful information.
Errors of Measurement:
In the case of giving the same test twice to the same sample, the scores or answers may not be identical. This is probably due to a variety of factors (differences in motivation, energy, anxiety, a different testing situation, and so on), and it is inevitable. Such factors result in errors of measurement. Since errors of measurement are always present to some degree, researchers expect some variation in test scores (in answers or ratings, for example) when an instrument is administered to the same group more than once, when two different forms of an instrument are used, or even from one part of an instrument to another. Reliability estimates provide researchers with an idea of how much variation to expect. Such estimates are usually expressed as another application of the correlation coefficient, known as a reliability coefficient.
A validity coefficient expresses the relationship between the scores of the same individuals on two different instruments. A reliability coefficient also expresses a relationship, but this time between the scores of the same individuals on the same instrument at two different times, or between two parts of the same instrument. The three best-known ways to obtain a reliability coefficient are the test-retest method, the equivalent-forms method, and the internal-consistency methods. Unlike other uses of the correlation coefficient, reliability coefficients must range from 0.00 to 1.00 (Fraenkel, 1993).
Test-Retest Method
It is the administration of the same test twice to the same group after a certain time interval has elapsed. To calculate the reliability coefficient, the relationship between the two sets of scores is obtained. Here we have to look into the time factor: the longer the time interval, the lower the reliability coefficient, as there is a greater likelihood of changes in the individuals taking the test. In checking for evidence of test-retest reliability, an appropriate time interval should be selected. This interval should be one during which individuals would be assumed to retain their relative position in a meaningful group.
It is not possible to study a variable that has no stability. It is natural that some abilities (such as writing) are more subject to change than others (such as abstract reasoning). At the same time, some personal characteristics (such as self-esteem) are considered more stable than others (such as teenage interests). For educational research, stability of scores over a two- to three-month period is usually viewed as sufficient evidence of test-retest reliability; in reporting test-retest reliability coefficients, therefore, the time interval between the two testings should always be reported.
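The calculation itself is simply a correlation between the two sets of scores. A minimal sketch in Python, using invented scores for ten students tested about two months apart (all data and variable names here are illustrative, not from the text):

```python
# Test-retest reliability: correlate two administrations of the same test.
# The scores below are hypothetical, for illustration only.
from statistics import correlation  # available in Python 3.10+

first_administration = [42, 55, 38, 61, 47, 50, 33, 58, 45, 52]
second_administration = [44, 53, 40, 60, 45, 52, 35, 56, 47, 50]

# The test-retest reliability coefficient is the Pearson correlation
# between the two sets of scores for the same individuals.
r = correlation(first_administration, second_administration)
print(f"Test-retest reliability coefficient: {r:.2f}")
```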
Equivalent-forms Method
It is the administration of two different but equivalent (also called alternate or parallel) forms of an instrument to the same group of individuals. The questions are not the same, but they sample the same content and are constructed separately from each other. A reliability coefficient is then calculated between the two sets of scores obtained. A high coefficient would indicate strong evidence of reliability – that the two forms are measuring the same thing.
It is possible to combine the test-retest and equivalent-forms methods by giving two different forms of the same test with a time interval between the two administrations. A high reliability coefficient would indicate not only that the two forms measure the same sort of performance but also what we might expect with regard to consistency over time.
Internal-consistency methods
The above methods involve administering two tests or testing in two sessions. There are methods that estimate reliability from internal consistency, administering a single instrument once. The two important methods are the Split-Half procedure and the Kuder-Richardson approach.
Split-Half procedure
It involves splitting the items into two halves, viz. odd-numbered and even-numbered. Every individual's score is taken on each half, and a correlation coefficient is computed for the two sets of scores. The coefficient indicates the degree to which the two halves of the test provide the same results, and hence describes the internal consistency of the test. The Spearman-Brown prophecy formula is then used to calculate the reliability coefficient for the whole test. The formula goes as follows:
Reliability of scores on total test = (2 × reliability for ½ test) / (1 + reliability for ½ test)

Hence, if a correlation coefficient of 0.56 is obtained by comparing one half of the test items to the other half, the reliability of scores for the total test would be:

Reliability of scores on total test = (2 × 0.56) / (1 + 0.56) = 1.12 / 1.56 = 0.72
The above illustration implies an important characteristic of reliability. The reliability of a test (or any instrument) can generally be increased by increasing its length if the items added are similar to the original ones.
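To make the procedure concrete, here is a minimal sketch assuming a small hypothetical matrix of item scores: it correlates odd- and even-item half-scores and then applies the Spearman-Brown step-up. The data are invented for illustration.

```python
# Split-half reliability with the Spearman-Brown prophecy formula.
# Rows are students; columns are items (1 = correct, 0 = incorrect).
from statistics import correlation

scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 1, 0, 1],
]

# Split each student's items into odd- and even-numbered halves.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

r_half = correlation(odd_half, even_half)   # reliability for half test
r_total = (2 * r_half) / (1 + r_half)       # Spearman-Brown step-up
print(f"Half-test r = {r_half:.2f}; whole-test reliability = {r_total:.2f}")
```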
Kuder-Richardson approach
It is the most frequently employed method for determining internal consistency. It has two formulae: KR20 and KR21. If the items are of equal difficulty, KR21 can be used. (The formula KR20 does not require the assumption that all items are of equal difficulty.) KR21 requires only three pieces of information: the number of items in the test, the mean, and the standard deviation.
KR21 reliability coefficient = [K / (K − 1)] × [1 − M(K − M) / (K × SD²)]

Here K refers to the number of items in the test, M stands for the mean of the set of test scores, and SD stands for the standard deviation of the set of test scores.
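A minimal sketch of the computation, assuming a hypothetical 50-item test and invented total scores (note that the denominator uses SD squared, i.e. the variance):

```python
# KR21 internal-consistency estimate from three pieces of information:
# number of items (K), mean (M) and standard deviation (SD) of the scores.
from statistics import mean, stdev

K = 50                                              # items in the test
scores = [38, 42, 29, 45, 33, 40, 36, 31, 44, 37]   # hypothetical totals

M = mean(scores)
SD = stdev(scores)

kr21 = (K / (K - 1)) * (1 - (M * (K - M)) / (K * SD ** 2))
print(f"KR21 reliability coefficient: {kr21:.2f}")
```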
Alpha Coefficient
The alpha coefficient is frequently called Cronbach's alpha (after Lee Cronbach, who developed it). This coefficient is a general form of the KR20 formula, used in calculating the reliability of items that are not scored right versus wrong, as in some essay tests where more than one answer is possible.
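Because alpha generalizes KR20 to items scored on a scale rather than right/wrong, the following sketch uses a hypothetical matrix of 5-point ratings and the usual computation: the sum of item variances compared against the variance of the total scores.

```python
# Cronbach's alpha for items scored on a scale (not right vs. wrong).
# Rows are respondents; columns are items rated on a 5-point scale.
from statistics import pvariance

ratings = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
]

k = len(ratings[0])                      # number of items
item_vars = [pvariance([row[i] for row in ratings]) for i in range(k)]
total_var = pvariance([sum(row) for row in ratings])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```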
Scoring Pattern
Generally, instruments are administered with specific directions and are scored objectively: there may be no room for the scorer's judgment, though there may be differences between scorers to a certain extent. This is not the case with essay evaluations. Instruments that use direct observation are vulnerable to observer differences. For such instruments, the degree of scoring agreement should be reported.
INSTRUMENTATION
Research is based on the collection of data. Findings and conclusions are based on what the data reveal. Hence researchers should take care over the kinds of data to be collected, the methods of collection to be used and the scoring of the data. Let us see what data means.
Data refers to the kinds of information researchers obtain on their subjects, viz.:
✔ Demographic particulars
✔ scores of tests conducted
✔ responses in oral interviews
✔ written responses to survey questionnaires
✔ essays written by students
✔ performance reports
✔ anecdotal records and so on.
The whole process of collecting data is called instrumentation. It involves selection of items, design of the instruments, and the conditions under which the instrument will be administered; it also covers the location of the data collection, the time of collection, the frequency of administration and the person who administers it. The above facts are important in making the data reliable and useful, so that the researcher can draw accurate conclusions about the characteristics of the people being studied.
Validity, Reliability, Objectivity and Usability
Validity of an instrument means that it measures what it purports to measure. It is in the hands of researchers how they present their inferences from the data collected through the instrument they have used. Therefore they go for the instrument which helps them draw valid conclusions about the characteristics of the subjects they study. For example, to measure the reading ability of the subjects, the instrument should measure that ability.
Reliability is the other important factor to be taken care of. It concerns the consistency of the results. If the researcher tested the reading ability of particular subjects at two or more different times and got almost the same result every time, then the tool has reliability.
Objectivity refers to the absence of subjective judgments. The researcher has to avoid subjectivity when giving judgments on the performance or characteristics of the subjects.
Usability concerns how easy it will be to use the particular instrument that has been developed: How long will it take to administer? Are the directions indicated clearly? Is it appropriate for the group to whom it will be administered? How easy is the scoring pattern? How easy is it to interpret? Does it have equivalent forms? Does it have evidence of reliability and validity? If a researcher attends to all the above questions carefully, he can do justice to the research at the end.
Means of data collection
Once the instrument is ready, there comes the question of obtaining information. It could be gathered by the researcher himself without any assistance, obtained directly from the subjects, or supplied by other informants.
1) Researcher-completed instruments:
Eg. Rating scales, Interview schedules, Tally sheets, Flow charts, Performance check lists, Anecdotal records, and Time and motion logs
2) Subject-completed instruments:
Eg. Questionnaires, Self-check lists, Attitude scales, Personality or character inventories, Achievement/Aptitude tests, Performance tests, Projective devices and Sociometric devices
3) Informant-completed instruments:
Eg. Rating scales, Anecdotal records and Interview schedules
(Instruments can also be classified as 1. written-response-type instruments, and 2. performance-type instruments.)
Source of the instrument:
The researcher may either use a previously existing instrument of some sort or develop one on his own.
Unobtrusive measures:
It refers to data collection procedures that involve no intrusion into the naturally occurring course of events. It may take the form of record keeping.
Types of scores:
Raw scores and Derived scores
Norm-referenced and Criterion-referenced Instruments
Measurement scales (Nominal, Ordinal, Interval and Ratio scales)
____________
Research Methods
Historical Research:
The Purpose:
1. To become aware of past failures and successes
2. To learn how things were done and how they can be applied
3. To assist in prediction (policy makers)
4. To test hypotheses concerning relationships or trends. Eg. Secondary school teachers have enjoyed greater prestige than elementary school teachers since 1940.
5. To understand present educational practice and policies more fully.
Steps:
1. Defining the problem or question to be investigated.
2. Locating relevant sources of historical information.
3. Summarizing and evaluating the information obtained from these sources.
4. Presenting and interpreting the information as it relates to the problem or question that originated the study.
Defining the problem:
The aim is to describe, clarify, explain and sometimes to correct (as when a researcher finds previous accounts of an action or event to be in error). It is better to study a well-defined problem in depth – perhaps one narrower than one would like – than to pursue a more broadly stated problem that cannot be sharply defined or fully resolved.
Titles:
Eg:
• The decline in age at leaving home, 170 – 2000.
• Annadurai and his speeches.
• Kamaraj and his political principles.
• Women professionals before independence.
Locating relevant Resources:
Categories of Sources:
Documents, numerical records, oral statements and records and relics, songs, stories and legends.
A relic is any object whose physical or visual characteristics can provide some information about the past. Eg. furniture, art work, clothing, buildings, monuments or equipment.
Primary Vs Secondary Sources:
Primary Source: A document or testimony prepared by an individual who was a participant in or a direct witness to the event.
Secondary Source: A document prepared by an individual who was not a direct witness to an event but who obtained the description of the event from someone else.
Eg: A newspaper editorial commentary on a Teacher’s speech.
Summarizing and evaluating information (cf. the historian E. H. Carr):
Key questions for any historical researcher:
• Was this document really written by the supposed author? (i.e. is it genuine?) [External Criticism]
• Is the information contained in the document true? (i.e. is it accurate?) [Internal Criticism]
External Criticism: (Nature and authenticity of the document itself)
• Who wrote the document?
• For what purpose was it written?
• When was it written?
• Where was it written?
• Under what conditions was it written?
• Do different forms or versions of the document exist?
Internal Criticism: (What the document says)
Are the data presented (attendance records, budget figures, test scores and so on) reasonable?
With regard to the author of the document:
• Was the author present at the event he or she is describing?
• Was the author a participant in or an observer of the event?
• Was the author competent to describe the event? (Refer to the qualifications of the author.)
• Was the author emotionally involved in the event?
• Did the author have any vested interest in the outcomes of the event?
With regard to the contents of the document:
• Do the contents make sense?
• Could the event described have occurred at that time?
• Would people have behaved as described? (A major danger here is presentism – ascribing present-day beliefs, values and ideas to people who lived at another time – a kind of historical hindsight.)
• Does the language of the document suggest a bias of any sort? (Emotionally charged or intemperate language of the author.)
• Do other versions of the event exist?
Generalization in historical research:
- Researchers are rarely able to study an entire population of individuals or events.
- They study a sample of the phenomena of interest.
Disadvantages:
1. Researchers have no control over threats to validity.
2. Limitations are severe (document analysis).
3. They cannot ensure the representativeness of the sample,
4. nor can they check the validity and reliability of the inferences made from the data available.
5. The possibility of researcher bias exists.
6. Any observed relationships may be due to subject characteristics, implementation, history, maturation, attitude or location.
7. Historical research depends mostly on the skill and integrity of the researcher, since methodological controls are unavailable; in this it differs from the other research methods.
Case Study Analysis
• Case study analysis can penetrate situations in ways that are not always susceptible to numerical analysis.
• It can establish cause and effect; indeed one of its strengths is that it observes effects in real contexts, recognizing that context is a powerful determinant of both causes and effects.
• Case study approach is valuable when the researcher has little control over events.
Hallmarks:
• It is concerned with a rich and vivid description of events relevant to the case.
• It provides a chronological narrative of events relevant to the case.
• It blends a description of events with an analysis of them.
• It focuses on individual actors or groups of actors and seeks to understand their perceptions of events.
• It highlights specific events that are relevant to the case.
• The researcher is integrally involved in the case.
• An attempt is made to portray the richness of the case in writing up the report.
• It can make theoretical statements, but like other research these must be supported by the evidence presented.
Types: (Yin, 1984)
a. Exploratory (as a pilot to other studies or research questions) eg. Discovery Channel
b. Descriptive (Providing narrative accounts)
c. Explanatory (Testing theories)
Exploratory case studies can be used to generate hypotheses that are tested in large-scale surveys, experiments or other forms of research, e.g. observational studies. There are important issues to be faced in undertaking case studies. According to Adelman et al., 1980; Nisbet and Watt, 1984; Hitchcock and Hughes, 1995, they include:
1. What exactly is a case?
2. How are cases identified and selected?
3. What kind of case study is this? (Purpose)
4. What is reliable evidence?
5. What is objective evidence?
6. What is an appropriate selection to include from the wealth of generated data?
7. What is a fair and accurate account?
8. Under what circumstances is it fair to take an exceptional case?
9. What kind of sampling is most appropriate?
10. To what extent is triangulation required and how will this be addressed?
11. What is the nature of validation?
12. How will the balance be struck between uniqueness and generalization?
13. What is the most appropriate form of writing up and reporting case studies?
14. What ethical issues are exposed in undertaking a case study?
A key issue in Case study research is the selection of information.
- Selection need not always adhere to representativeness; sometimes unrepresentative but critical incidents or events occur that are crucial to the understanding of the case. Significance rather than frequency is a hallmark of case studies, offering the researcher an insight into the real dynamics of situations and people.
Two ways/ Types of Case Studies:
Participant and non-participant observation.
Non-participant observation: Eg. Flanders' Interaction Analysis.
Advantages of participant observation:
1. Observation is superior when data are being collected on non-verbal behavior.
2. The observer is able to make appropriate notes about its salient features.
3. Because of the extended period of time, researchers can develop more intimate and informal relationships with those they observe.
4. Case study observations are less reactive than other types of data gathering, e.g. answering questionnaires.
Designing Case Studies
Yin (1994) identified 5 components.
1. A Study’s question.
2. Its propositions, if any.
3. Its unit(s) of analysis.
4. The logic linking the data to the propositions.
5. The criteria for interpreting the findings.
- The 'how' and 'why' questions of the study – their definition is the first task of the researcher.
- An exploratory study should state its purpose. The unit of analysis defines what the case is. Linking the data to propositions and the criteria for interpreting the findings are the least developed aspects of case studies.
Campbell (1975) described 'pattern-matching', a technique for establishing internal validity – applicable only in explanatory case studies.
External validity concerns whether findings are generalizable beyond the immediate case.
Reliability can be achieved in many ways in case studies. One such way is the development of a case study protocol, which includes:
• An overview of the case study project (Objectives, issues, topics being investigated).
• Field procedures (Credentials and access to sites, sources of information)
• Case study questions (specific questions – investigator must keep in mind during data collection)
• A guide for case study report (Outline, format for the narrative)
6 sources of evidence:
- Documents (Performance data)
- Archival records (Service records, original list of names etc.,)
- Interviews
- Direct observation
- Participant observation
- Physical artifacts
Miles and Huberman (1984) suggested analytic techniques such as
- Rearranging the arrays
- Placing the evidence in the matrix of categories
- Creating flow charts or data displays
- Tabulating the frequencies of different events
- Using means, variances and cross-tabulations to examine the relationships between variables, and other such techniques to facilitate analysis.
4 principles in analysis:
1. Show that the analysis relied on all the relevant evidence.
2. Include all major rival interpretations in the analysis.
3. Address the most significant aspect of the case.
4. Use the researcher's prior expert knowledge to further the analysis.
• Representativeness: rather than generalizing to a population, one generalizes to a theory, based on cases selected to represent dimensions of that theory.
• Case selection should be theory driven.
• Cross – theoretic case selection.
• Pattern matching – to establish consistency between quantitative and qualitative evidence.
• Process tracing -
a. Controlled observation (on key variables)
b. Time-series analysis (not just before-and-after observation, but observation outside the range of normal fluctuation of the time series)
• Congruence testing (suitableness) – identifying cases of identical agreement; when there are a large number of cases, statistical methods of correlation and control can be used.
• Explanation – building. Triangulation.
Ethnographic Research or Naturalistic inquiry:
• Ethnic means pertaining to the customs, dress, food, etc., of a particular racial group or culture.
• Ethnography means the scientific descriptions of the races of the earth.
• Ethnographic means pertaining to ethnography.
• Combination of participant and non participant observation.
• It aims to obtain a holistic picture of a particular society, group, institution, setting or situation.
• It portrays everyday experiences through observing and interviewing.
• It seldom starts with a precise hypothesis.
Eg. What is life like in an international public school?
It makes use of all available tools to get data.
Advantages:
1. It provides a comprehensive perspective.
2. It suits research topics that are not easily quantified.
3. It can study group behavior over time.
Disadvantages:
1. There is no way to check the validity of the researcher's conclusions; as in all research, observer bias is impossible to eliminate.
2. Since a single situation is studied, there is no generalizability.
3. There are no hypotheses, so relationships between variables remain unclear.
4. Pre-planning and review are less useful, so pitfalls in the methodology go unidentified; it is risk-prone.
5. It is time-consuming.
Ethnography is suitable for the following:
• Topics that by nature defy simple quantification.
• Topics best understood in a natural setting.
• Topics involving the study of individual or group activities over time.
• The roles educators play and the behaviors associated with those roles.
• The study of the activities of a key individual or group.
• Studies involving formal organizations.
Ex:
1. A rural teacher.
2. Questioning at home and at school: a comparative study.
Research must include ‘thick’ descriptions.
Researchers are the instruments of the research.
Attribution of meaning is continuous over time.
Researchers generate rather than test hypotheses.
Theory generation is derivative – grounded – the data suggest the theory rather than vice versa.
• Purposive sampling enables the full scope of issues to be explored.
• Research design emerges over time.
• Applications are tentative and pragmatic.
• Trustworthiness and its components replace more conventional views of reliability and validity.
• Needs propositional knowledge and tacit knowledge.
• The researcher becomes the human instrument in the field, building on her tacit knowledge in addition to her propositional knowledge, and using methods that fit comfortably with human inquiry, e.g. observation, interviews, documentary analysis and unobtrusive methods. The advantages she brings are her adaptability, responsiveness, knowledge, ability to handle sensitive matters, ability to see the whole picture, and ability to clarify and summarize, to explore, to analyze, and to examine typical or idiosyncratic responses.
• Ethnography makes extensive use of unstructured observations and conversations, documented by detailed field notes, which form the basis for this type of research.
• It attempts to explain the interdependence of group behaviors and interactions.
Strategies to enhance Trustworthiness:
1. Bracketing is the process of the researcher becoming self-aware and reflecting on the research process and her own assumptions.
2. Prolonged engagement and persistent observation.
3. Multiple data sources. These provide opportunities for comparison of data among and between participants as well as across different types of data sources.
4. Participant checking (follow-up and feedback).
5. A peer-researcher support group throughout the research process: support group members review and comment on transcripts of participant observation and interviews, discuss memos written by the researcher, and provide a forum for discussing the researcher's ideas.
6. Confirmability and dependability of the results – rich description.
Stages: 1. Locating a field of study.
2. Addressing ethical issues.
3. Deciding the sampling.
4. Finding a role and managing entry into the context.
5. Finding informants.
6. Developing and maintaining relations in the field.
7. Data collection in situation.
8. Data collection outside the field.
9. Data Analysis.
10. Leaving the field.
11. Writing the report.
Observation:
• It provides data from 'live' situations.
• It helps the researcher to understand the context of the program, to be open-ended and inductive, and to see things that might otherwise be unconsciously missed.
• Discover things.
• Access personal knowledge.
• Observation enables the researcher to gather data on:
• The physical setting.
• The human setting.
• The interactional setting.
• The program setting.
From unstructured to structured.
From responsive to pre-ordinate (worked out in advance).
A semi-structured observation proceeds in a far less systematic manner.
Hypothesis generating
From complete participation to a detachment
Closeness and distance
Familiarity and strangeness
Quantitative research tends to have a small field of focus, fragmenting the observed into minute chunks that can subsequently be aggregated into a variable.
Qualitative research – Situation unfold
‘Fitness of Purpose’
Structured: the researcher will need to decide
1. The foci of the observation (e.g. people as well as events)
2. The frequency of the observations (e.g. every 30 seconds, every minute, every 2 minutes)
3. The length of the observation period (e.g. 1 hour, 20 minutes)
4. The nature of the entry (a coding system)
Student to student
Student to students; Student to teacher; Students to Teacher; Teacher to Students; Teacher to Student; Student to self; Task in hand; Previous task; Future task; Non – task. (Every 30 seconds)
There are 4 principal ways of entering data onto a structured observation schedule:
Event Sampling:
Teacher shouts at the child I I I I I
Child shouts at the child I I I
Parents shouts at the teacher I I
Teacher shouts at the parents I I
Instantaneous sampling (otherwise called time sampling):
If it is important to know the chronology of events, then it is necessary to use Instantaneous sampling, observation at standard intervals of time.
Interval Recording
Entries are made at fixed intervals. However, instead of charting what is happening at that instant, it charts what has happened during the preceding interval.
It enables frequencies to be calculated, simple patterns to be observed and an approximate sequence of events to be noted, because it charts what has taken place in the preceding interval of time.
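A minimal sketch of how interval-recording entries might be tallied, assuming a hypothetical coding system (the category labels are invented for illustration): one code is entered for whatever happened during each preceding 30-second interval, so both frequencies and the approximate sequence of events can be recovered.

```python
# Interval recording: one code per preceding 30-second interval.
# The codes and categories below are hypothetical examples.
from collections import Counter

codes = ["T->S", "T->S", "S->T", "S->S", "T->S",
         "task", "task", "S->T", "T->S", "task"]

print(Counter(codes))          # frequencies of each category
print(" -> ".join(codes))      # approximate sequence of events
```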
Rating Scale:
5 point scale
1 – not at all
2 – very little
3 – a little
4 – a lot
5 – a very great deal
Why observation?
Delude – to deceive, to cause to accept what is false as true.
1. We may delude ourselves about what is happening: we observe what we want to see – harmonious relationships, effective practice, rules that are consistently followed.
Content Analysis
Content analysis is the analysis of the content of speeches, essays, documents, journals, films, music and other media. It is a research technique for the objective, systematic and quantitative description of the manifest content of communication.
Procedure:
1. Specification of objectives: this indicates the purpose. Eg. a school textbook – if it is the affective objectives in the lessons that are of interest, the categories of such objectives are listed and categorized accordingly.
2. Hypothesis formulation, derived after the review of the research. Research hypotheses should not be written in the null form and should predict a single difference or relationship. Each term in a hypothesis should be clearly defined, and the hypothesis should be testable.
3. Sampling specific samples of content. There are 3 universes from which samples may be drawn: (i) titles (a specific newspaper, a lesson in a textbook or a journal), (ii) issues, (iii) content found in issues or titles. Eg. sports in a newspaper.
4. Determining the categories. According to Berelson, there are two kinds: (i) 'what is said' and (ii) 'how it is said'. (i) includes 'subject matter' (i.e., what the communication is about): direction, standards, values, methods, traits, actor, authority, origin and target (all of these refer to the 'what' categories, i.e., information or facts). (ii) covers the 'form' or type of communication: form of statement, intensity, and devices, including those denoting higher-order thinking.
5. Category analysis: as per the nature and purpose of the investigation, the categories are analyzed into component sub-categories.
6. Quantification: determining the frequency of occurrence of each of the units.
7. Standardizing the procedure: the coding procedures must be tried out by the persons who will actually do the coding. Validity, reliability and objectivity need to be ensured when coding is done.
8. Reliability: the accuracy of coding determines the reliability of the quantitative data. In some instances, inter-observer reliability is established by the extent to which a number of coders can consistently analyze the same data. Another method of establishing such reliability is having one or more observers code the same data over a period of time. Serious limitations concerning the reliability of qualitative material should be avoided when coding is done; otherwise the investigation's findings will suffer from several defects. Content analysis thus offers a research technology for document analysis.
1. Definition and categorization of units:
The universe of content to be analyzed is a reflection of the theory or hypotheses being tested. It spells out, in effect, the variables of the hypothesis.
2. Units of Analysis:
Berelson lists 5 major units of analysis: words, themes, characters, items, and space-and-time measures.
a. Words – eg. value words and non-value words; difficult, medium and easy words.
b. The theme is a useful but more difficult unit. Eg. discipline – an interesting theme within the larger theme of child training.
c. Characters and space-and-time measures – eg. number of inches, pages, paragraphs, minutes of discussion and so on.
d. The item is an important unit – an essay, news story, television program, class recitation or discussion, autobiography.
Quantification:
1. Nominal measurement – frequency counts.
2. Ranking – ordinal measurement: judges may be asked to rank items according to specific criteria.
3. Rating – children's pieces can be rated for degrees of creativity, originality, achievement orientation, interests, values and other variables.
Before quantification, check that the category items are representative and present in sufficient numbers, and count carefully; otherwise, generalization from statistics calculated from them is unwarranted.
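At the nominal level, quantification reduces to counting how often each category occurs among the coded units. A minimal sketch, assuming hypothetical categories and coded units (none of these labels come from the text):

```python
# Nominal-level quantification in content analysis: frequency counts.
# Each coded unit (e.g. a sentence from a lesson) has been assigned
# one category by the coder; the categories here are hypothetical.
from collections import Counter

coded_units = ["knowledge", "values", "values", "methods", "knowledge",
               "values", "traits", "methods", "knowledge", "values"]

for category, count in Counter(coded_units).most_common():
    print(f"{category}: {count}")
```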
If the materials selected for analysis do not fulfill these conditions, the conclusions can be used for suggestive purposes only, and not for relating variables to each other.
Survey Method:
Cross-sectional and longitudinal
Steps:
1. Problem definition
2. Identification of the target people
3. Mode of data collection
i. Direct administration
ii. Mail survey
iii. Telephone survey
iv. Personal interview
4. Selection of the instrumentation
5. Preparation of the instrumentation
1. Is this a question that can be asked exactly the way it is written?
2. Is this a question that will mean the same thing to everyone?
3. Is this a question that people can answer?
4. Is this a question that people will be willing to answer, given the data collection procedures?
5. Types of questions:
Closed-ended or open-ended. Guidelines for writing questions:
- Unambiguous
- Keep the focus as simple as possible
- Short
- Use common language
- Use common language
- Avoid the use of the words that might ‘bias’ responses.
- Avoid leading questions
- Avoid double questions
6. Presenting the questionnaire:
Preparation of the cover letter. Threats to the validity of the instrumentation process: location, history, instrumentation.