Quantitative Research Plan (40%)
Students will identify a research topic (a continuation from the qualitative assessment, or a
new topic) of personal interest and apply their knowledge of mixed methods, correlational
methods, or experimental methods (including quasi-experimental and single-case designs) to
develop a research plan for a proposed study. Students will be required to consider the key
elements of one of these methodological approaches in order to critically evaluate research
evidence. Students will be asked to:
1. describe the research aims and rationale for the study
2. outline the main research questions and their corresponding hypotheses
3. describe the methodological approach and identify how it will be able to “answer” the
research questions
The assessment has a word limit of 3000 words.
Chapter 5 in all editions of Johnson & Christensen focuses on writing a research proposal.
Your assignment covers the Introduction and Methods sections.
The assessment will include:
Introduction (suggested 1500 words)
1. A brief introduction to the research topic area that includes reference to a minimum
of five (5) academic research articles. From these (and any other relevant papers)
identify any implications / gaps for the research field that help to highlight the
importance of your research topic (see Table 1 for some suggestions). Is there a
particular professional or societal concern that your research is addressing?
2. For one (1) of the quantitative articles cited in the introduction, you will conduct a
detailed evaluation of the quality of the research study. You can use the criteria
presented in Table 2 to guide this evaluation. You should include this as an Appendix
at the end of your document.
3. Include a statement that reflects the aim of your proposed research, and which
includes your research questions and their corresponding hypotheses. These should
align with your chosen methodological approach.
Methods (suggested 1500 words)
Participants
1. A brief description of your intended participants, including relevant information such
as age, gender, inclusion and exclusion criteria, and where they will be recruited from.
2. The sampling strategy to be used (along with justification).
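When justifying your sampling strategy, it helps to be concrete about what the strategy implies in practice. The sketch below draws a simple random sample, in which every member of a defined sampling frame has an equal chance of selection; the frame, seed, and sample size are all invented for illustration.

```python
import random

# Hypothetical sampling frame: 500 eligible students
# (the IDs and sizes here are illustrative only).
sampling_frame = [f"student_{i:03d}" for i in range(500)]

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sample of 60 participants, drawn without
# replacement: each frame member has an equal selection chance.
sample = random.sample(sampling_frame, k=60)

print(len(sample))       # 60
print(len(set(sample)))  # 60 (no participant selected twice)
```

A stratified draw (e.g., sampling separately within year levels) would follow the same pattern, applied once per stratum.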
Design, Procedure, & Measurement
1. A description of your data collection design – this should align with your chosen
methodological approach.
2. What will participants be required to do?
3. A description of an assessment tool that will be used to measure your dependent
variable (if you have multiple dependent variables, select one), addressing issues of
measurement validity and reliability.
4. Rationale for the selection of variables (i.e., how they address the research question(s)).
5. The identification of threats to internal validity and measures taken to address these
threats.
6. Discuss how feasible you consider this project to be. What changes could be made to
make it more feasible? What are the trade-offs between feasibility and inferences
that can be drawn?
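Point 3 above asks you to address measurement reliability. One common index of internal consistency is Cronbach's alpha, sketched below in plain Python; the item scores are invented Likert-style responses, not data from any real instrument.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k items scored by the same n respondents.

    items: list of k lists; items[i][j] is item i's score for
    respondent j.
    """
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 4 Likert items, 6 respondents.
item_scores = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
    [3, 5, 3, 4, 2, 4],
]
print(round(cronbach_alpha(item_scores), 2))  # 0.9
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the appropriate threshold depends on the stakes of the measurement.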
Table 1. Ways in which prior studies can provide ideas for new studies.

Replication: If you think a study shows findings of significant educational importance, you
might want to repeat the study with a new sample to verify the findings.

Testing the external validity (i.e., generalizability) of a study: You might have read a
laboratory-based study that has significant educational implications, but you want to find
out whether the laboratory methods would work equally well in the classroom. Similarly,
you might have read a correlational or mixed methods study that was conducted in primary
schools in Canada, but you want to see if the findings also apply to secondary school
students in Australia.

Improving a study’s internal validity: You might read a study and realise that it did not
control one or more important variables, and that this lack of control led to an ambiguous
interpretation of the results.

Reconciling conflicting findings: When study findings conflict, look for differences between
the studies that might be the cause of the conflict, e.g., differences in study design,
participants, or measurement instruments.

Suggestions for future research: In the discussion section of a paper, authors often make
suggestions for future research which could form the basis of your own study idea.

Consideration in the current context/climate: You might read a study that was conducted
before important policy or practical changes were implemented. You want to see if the
findings of this research still hold in the current educational climate.
Table 2. Paper evaluation
Common to all approaches
Is the purpose of the research clearly stated?
What quantitative approach is used in the study?
Are there clearly stated research questions that align with the purpose of the study and
the quantitative approach?
Was an appropriate method of sampling used, given the purpose of the study?
Are the demographics and context of the participants accurately and comprehensively
described and are these participants appropriate for the study?
Are any research instruments valid and reliable for the participants of this study?
Is there anything in the procedure that may have biased the results?
How well were likely confounding factors identified and controlled?
Can the results be generalised to the populations and settings the researcher desires?
Do you think there were any significant flaws in the study design?
Specific to each approach

Correlational
Do the research questions and hypotheses use appropriately circumspect language?
Have the authors articulated a clear theory/rationale for their study and variable selection?
Are the research objectives and time dimensions clearly stated?
Does the description of the data collection (e.g., survey) match the time dimensions?
Can the variables included in the study adequately address the research questions?
What “third variables” have been accounted for? Which have not?

Experimental
Is the experimental research design clearly explained?
Did manipulation of the independent variable adequately represent the causal construct of
interest?
Were participants randomly assigned to conditions? If not, how was non-random
assignment decided and what measures were taken to ensure comparability?
Did the researchers take appropriate actions to control for extraneous variables?
Is there any evidence of differential attrition from the study?
Does the study describe any specific training required to implement the intervention?
Does the study describe detailed intervention procedures?
Does the study report on implementation fidelity?
Does the study clearly describe baseline (single-subject studies) or control conditions
(group comparison studies)?

Mixed Methods
Is it clear why a combination of quantitative and qualitative approaches was the best way
to address the research questions?
Is the mixed methods research design clearly explained?
Were qualitative and quantitative data collected that allowed the researchers to address
their research questions effectively?
Were validity-enhancing strategies used for each part of the study?
If part of the study was an experiment, was random assignment used? If not, how was
non-random assignment decided and what measures were taken to ensure comparability?