ABSTRACT
This paper presents the study's mixed methods design for exploring how students' English proficiency, online instructor guidance and online collaboration influence students' online learning in an introductory information technology course offered through Hong Kong's cyber higher education. The design mainly adopted Creswell and Plano Clark's (2011) follow-up explanations variant of the explanatory sequential mixed methods design. Reliability and validity, which have been discussed separately for quantitative and qualitative research, have to be reconsidered for mixed methods in order to reflect the multiple ways of establishing a study's trustworthiness. This paper discusses these issues. So far, the major findings of this research are presented in Wong's (2015) quantitative phase, which revealed that students' English proficiency exhibits the largest effect while online instructor guidance and online collaboration have a moderate effect on students' learning in cyber education. These quantitative findings inform the qualitative phase of the mixed methods as a follow-up to explore the cause-effect relationship between the variables and students' learning in cyber education.
Keywords: Reliability, Validity, Mixed Methods, Online Learning, Students' English Proficiency, Online Instructor Guidance, Online Collaboration
INTRODUCTION
Inspired by the differences between the research findings of Wong (2008) and many other findings (e.g., Aberson, Berger, Healy, Kyle, & Romero, 2000; Johnson, Aragon, Shaik, & Palma-Rivas, 2000; Lim, Kim, Chen, & Ryder, 2008), and by Asian students' learning characteristics as noted in Chin, Bauer, and Chang (2000) and Wong (2012), Wong (2015) randomly selected 75 students from each of the three groups in Wong's (2008) quasi-experiment and invited them to complete a survey. The participants in the three groups were associate degree and higher diploma students in a Hong Kong higher education college ("the college"). They were all registered for an introductory information technology (IT) course offered by the college and were randomly assigned to three teaching method groups. The first group was a classroom teaching group; the second group was an online teaching group; and the third group was a hybrid group in which the participating students used a combination of both classroom and online teaching methods. The survey aimed to capture the participants' views across the different teaching methods in order to explore the relationship between students' English proficiency, instructors' guidance in online discussion forums ("online instructor guidance") and peer collaboration in online discussion forums ("online collaboration"), on the one hand, and the students' learning performance in the online introductory IT course, on the other. Correlation and multiple regression analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 17.0 to identify this relationship. Correlation analysis was adopted to identify whether students' English proficiency, online instructor guidance and online collaboration could be potential factors affecting the students' learning performance. Multiple regression analysis was used to explore the combined effect of these three variables on learning performance. The major findings revealed quantitatively that students' English proficiency has the largest effect while online instructor guidance and online collaboration have a moderate effect on students' learning in cyber education. However, these findings could not confirm a cause-effect relationship between the three variables and students' learning in cyber education.
To confirm the cause-effect relationship, the researchers designed mixed methods consisting of Wong's (2015) quantitative approach and the qualitative approach proposed in this paper, which aims to explain Wong's (2015) quantitative findings. The design mainly adopted the follow-up explanations variant of Creswell and Plano Clark's (2011) explanatory sequential mixed methods design, in which a quantitative approach is first used to discover the quantitative relationship, and a qualitative approach is then adopted to obtain the in-depth understanding needed to establish explanations, as illustrated in Figure 1. In this design, the quantitative and qualitative approaches in the two phases are complementary and are executed in that order, as the explanation in the qualitative phase depends on the findings from the quantitative phase.
The literature has discussed the challenges and benefits of using mixed methods in research. The researchers present the literature review and provide justifications for using the mixed methods designed for this research study.
LITERATURE REVIEW
The literature has debated the suitability of the research paradigm, meaning a researcher's theoretical perspective on the world (Lincoln & Guba, 2000), for research using mixed methods. Four stances have been identified in this debate: pragmatism, the transformative paradigm, multiple paradigms in mixed methods, and a paradigm chosen according to the design of the mixed methods.
Some scholars (e.g., Cherryholmes, 1992; Murphy, 1990; Tashakkori & Teddlie, 2003) argue that pragmatism, in which researchers choose and mix quantitative and qualitative methods to explore objective and subjective knowledge (Dewey, 1933), is suitable for studies using mixed methods. In pragmatism, deductive and inductive approaches are mixed. A deductive approach provides objective ways to measure data, which in turn builds theory, but it is not appropriate for understanding meanings and explaining behavior. In contrast, an inductive approach is used for discovering patterns, consistencies and meanings of behavior from analysis of the collected data, but it cannot fully avoid subjective opinion.
Mertens (2007, 2009) proposed a transformative paradigm for social studies using mixed methods. Quantitative and qualitative methods within the transformative paradigm are mixed to obtain a deeper understanding of reality that is socially constructed (Mertens, 2007). The researchers representing the transformative paradigm for mixed methods include critical theorists, participatory action researchers, Marxists, feminists, racial and ethnic minorities, persons with disabilities and members of Indigenous communities (Mertens, 2015, p. 21). In this stance, mixing the quantitative and qualitative methods is necessary to accommodate the different beliefs, cultures, disabilities, languages, reading and writing skills, genders, classes and races for specific sub-groups in a population (Mertens, 2009). For example, mixed methods were used by Meadow-Orlans, Mertens, and Sass-Lehrer (2003) to study parents' experiences with their young deaf children in three sequential phases. The first phase was a quantitative survey, the second phase was individual interviews and the third phase was group interviews. The first quantitative phase was used to determine the different characteristics (e.g., races, parent hearing statuses, levels of parent education, socioeconomic statuses) of the families to be studied. The quantitative findings led to the different interview protocols for different characteristics of the families and in-depth focus group meetings.
Other scholars (e.g., Greene, 2007; Greene & Caracelli, 1997, 2003; Greene & Hall, 2010) support multiple paradigms in mixed methods. Greene (2007) regards this stance as "...multiple ways of seeing and hearing, multiple ways of making sense of the social world, and multiple standpoints on what is important and to be valued and cherished" (p. 20). Greene and Caracelli (1997, 2003) and Greene and Hall (2010) argued that researchers can use multiple paradigms in mixed methods to explore differences throughout the social world and obtain a better understanding of the inherent complexities and multifaceted nature of human phenomena.
Creswell and Plano Clark (2011) advocate different research paradigms for different mixed methods design types. For example, pragmatism is suitable for Creswell and Plano Clark's (2011) convergent parallel design of mixed methods, as its main purpose is triangulation through comparing and validating the quantitative and qualitative findings (Creswell & Plano Clark, 2011, p. 78). In Creswell and Plano Clark's (2011) explanatory sequential design of mixed methods, by contrast, the exploratory quantitative phase is followed by the explanatory qualitative phase. For this design, Creswell and Plano Clark (2011) supported the stance of multiple paradigms in mixed methods, as researchers typically begin from a post-positivism perspective in the exploratory quantitative phase, then shift to a constructivism perspective in the explanatory qualitative phase (p. 83). On the other hand, a variant of this design, called the follow-up explanations variant of the explanatory sequential design of mixed methods (Creswell & Plano Clark, 2011), has a quantitative emphasis that calls for an explanation and uses the research paradigm of post-positivism (Hanson, Creswell, Plano Clark, Petska, & Creswell, 2005; Plano Clark & Creswell, 2010, p. 66). The researchers accepted the standpoint of post-positivism and adopted this follow-up explanations variant in this study.
As pointed out by Hanson, Creswell, Plano Clark, Petska, and Creswell (2005) and Razzhavaikina (2007), it is time-consuming and complicated to carry out the different stages in a sequential design of mixed methods, but there are advantages to using mixed methods. Mixed methods can provide answers to different research questions (Denzin, 1989; Gray, 2014; Sieber, 1973; Strauss, 1987). Quantitatively, correlation analysis can be used to determine whether each variable is correlated with a student's test score. However, two variables having a high correlation may not be causally related (Weiss, 2012, p. 659). In this study, correlation and multiple regression analyses could not confirm that the variables and the students' test scores are causally related. Interviews can be used in conjunction with surveys to follow up on issues (Cohen, Manion, & Morrison, 2011). In this study, interviews could be conducted to obtain the perceived reasons for the relationship between the variables and the students' learning, as well as the participants' views on their experiences using online education and on how to develop effective learning in cyber education.
Gray (2014) also stated:
The second reason for using multiple methods is that it enables... data triangulation as the collecting of data over different times or from different sources. This approach is typical of cross-sectional designs. Methodological triangulation is also possible, with the use of a combination of methods such as case studies, interviews and surveys. (p. 37)
According to Bush (2002), triangulation is not limited to asking the same questions of different participants; it can also be achieved through methodological triangulation, that is, using different methods to explore the same issue.
In mixed methods, the quantitative and qualitative methods are combined to counter each method's weaknesses, giving research using mixed methods complementary strengths (Brewer & Hunter, 1989; Johnson & Turner, 2003; Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). According to Cohen, Manion, and Morrison (2011), "one advantage, for example, is that it [interview] allows for greater depth than is the case with other methods of data collection" (p. 411). Compared with questionnaires, in-depth understanding can be obtained by asking probing questions in interviews (Cohen, Manion, & Morrison, 2011). In this research, it was difficult to ask probing questions in the quantitative phase, while characterizing a large sample was not feasible in the qualitative phase. The different approaches in the two phases could thus be adopted to counter each other's weaknesses.
MIXED METHODS
The research design for this study contains two phases in sequence: Phase 1 is a quantitative phase and Phase 2 is a qualitative phase. In Phase 1, which comprises Wong's (2015) analytical survey, correlation and multiple regression analyses were adopted to examine the relationship between the potential factors (or predictor variables) and the participating students' learning as reflected by their test scores obtained in Wong (2008). The quantitative findings in Phase 1 were identified for further investigation in Phase 2. In Phase 2, which contains the proposed case study, semi-structured interviews were conducted to explore the causal relationship and explain the quantitative findings from Phase 1. The main focus of the proposed research is the explanatory design. Some of the findings of both phases could be compared and contrasted for triangulation. Figure 2 presents the proposed research design, in which Phase 1 is followed by Phase 2, as indicated by the arrows in the main flow. The complementary triangulation flow is indicated by the dotted arrows.
The mixed methods suggested for the proposed research were a sequential execution of the quantitative and qualitative approaches. In the first quantitative phase, correlation and multiple regression analyses of the collected quantitative data were performed, and the quantitative findings were identified for further investigation in the second qualitative phase. In the second qualitative phase, a qualitative analysis of the collected qualitative data was carried out. In addition, a quantitative analysis of the collected qualitative data was performed by quantifying the data. To quantify data, Sandelowski (2001) explains that "qualitative 'themes' are numerically represented, in scores, scales, or clusters, in order more fully to describe and/or interpret a target phenomenon" (p. 231). As stated by Johnson and Christensen (2012), "this [quantifying data] allows researchers to understand how often various categories or statements occurred in qualitative data" (p. 540). Finally, both quantitative and qualitative findings were integrated for interpretation and triangulation.
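To make the quantifying step concrete, the following is a minimal sketch in Python of counting how often coded themes occur in qualitative data; the code labels are hypothetical illustrations, not the study's actual deductive codes.

```python
# Minimal sketch of quantifying qualitative data: counting how often each
# deductive code (theme) occurs across coded interview segments.
# The code labels below are hypothetical, not the study's actual codes.
from collections import Counter

coded_segments = [
    "english_proficiency", "online_collaboration", "english_proficiency",
    "instructor_guidance", "english_proficiency", "online_collaboration",
]

theme_counts = Counter(coded_segments)
total = sum(theme_counts.values())
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} occurrences ({count / total:.0%})")
```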
Phase 1 - The Quantitative Phase
In Phase 1, Wong (2015) used stratified random sampling with proportional allocation (Weiss, 2012, p. 19) to select 75 participants from each of the three groups in Wong's (2008) quasi-experiment. The sample size of 75 was based on the threshold N > 50 + 8w (Tabachnick & Fidell, 2013, p. 123) for multiple regression, where w is the number of predictor variables. In this study, with three predictor variables, the sample size of 75 exceeds the threshold 50 + 8 × 3 = 74. The three groups, as stated in Wong (2015), were as follows:
One group of 100 students, as a control group, was assigned to teaching method 1 in which the students had to attend both lectures and tutorials in classrooms as they did in classroom teaching. The other two groups were the experimental groups. One group of 100 students used teaching method 2. They did not attend lectures or tutorials in classrooms and learned mainly through reading the materials in the college's cyber education system and discussing them with other students and instructors in the discussion forum in that system. The remaining group of 100 students used teaching method 3. The students were not required to attend lectures, but they were required to attend tutorials and learn through the materials posted in the college's cyber education system. (pp. 117-118)
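To illustrate the sampling logic described above, the following is a minimal sketch in Python of the sample-size check (N > 50 + 8w) and the stratified random selection of 75 students per group; the student identifiers are synthetic placeholders.

```python
# Sketch of the Phase 1 sampling logic: verify the multiple-regression
# sample-size threshold N > 50 + 8w (Tabachnick & Fidell, 2013), then draw a
# stratified random sample of 75 students from each 100-student group.
import random

NUM_PREDICTORS = 3                        # English proficiency, guidance, collaboration
threshold = 50 + 8 * NUM_PREDICTORS       # = 74
SAMPLE_PER_GROUP = 75
assert SAMPLE_PER_GROUP > threshold, "sample too small for multiple regression"

random.seed(42)  # fixed seed so the illustration is reproducible
groups = {
    "classroom": [f"C{i:03d}" for i in range(100)],  # synthetic student IDs
    "online":    [f"O{i:03d}" for i in range(100)],
    "hybrid":    [f"H{i:03d}" for i in range(100)],
}
sample = {name: random.sample(ids, SAMPLE_PER_GROUP) for name, ids in groups.items()}
print({name: len(chosen) for name, chosen in sample.items()})
# {'classroom': 75, 'online': 75, 'hybrid': 75}
```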
Then, the researchers used SPSS to carry out correlation and multiple regression analyses in order to identify the relationship between the predictor variables (i.e., students' English proficiency, online instructor guidance and online collaboration) and Wong's (2008) students' test scores.
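A minimal sketch of analogous analyses in Python is shown below for illustration (the study itself used SPSS 17.0); the variable names and data are hypothetical placeholders, not the study's data.

```python
# Sketch of the Phase 1 analyses: Pearson correlations between each predictor
# and test scores, then an ordinary-least-squares multiple regression for the
# combined effect. Data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 225  # three groups of 75
df = pd.DataFrame({
    "english_proficiency": rng.normal(60, 10, n),
    "instructor_guidance": rng.normal(20, 5, n),
    "online_collaboration": rng.normal(30, 8, n),
})
df["test_score"] = (0.6 * df["english_proficiency"]
                    + 0.3 * df["instructor_guidance"]
                    + 0.3 * df["online_collaboration"]
                    + rng.normal(0, 8, n))

# Correlation analysis: is each predictor associated with the test score?
print(df.corr()["test_score"])

# Multiple regression: combined effect of the three predictors.
X = sm.add_constant(df[["english_proficiency", "instructor_guidance",
                        "online_collaboration"]])
model = sm.OLS(df["test_score"], X).fit()
print(model.summary())  # coefficients, p-values, R-squared
```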
Phase 2 - The Qualitative Phase
For Phase 2, the researchers suggested a case study that is defined by Bassey (1999) as follows:
An educational case study is an empirical enquiry which is conducted within a localized boundary of space and time, into interesting aspects of an educational activity, or programme, or institution, or system, ... such that sufficient data are collected for the researcher to be able to explore significant features of the case, to create plausible interpretations of what is found, ... (p. 58)
For the case study in this qualitative phase, the empirical enquiry includes the interviews, the localized boundary is the college students taking the introductory IT course and the aspects of the educational activity are the three teaching methods.
For this case study, the researchers used stratified purposeful sampling (Gall, Gall, & Borg, 2007, p. 182) to select eight students from each teaching-method stratum for interviews designed to capture the characteristics and variations of the students' learning across the different teaching methods. In contrast to the large sample (three 75-student groups) needed in the quantitative Phase 1 to identify the association between the variables and the test scores, the interviews with the small samples (three 8-student groups) facilitated detailed descriptions and explanations in the qualitative Phase 2. As stated by Creswell (2014), if the intent of the qualitative phase is to explain the results obtained from the quantitative phase, then the qualitative sample should be drawn from the same group of individuals as the initial quantitative sample (p. 224). Member checking was performed by presenting the recorded interviews and the interview transcripts to the interviewees for confirmation. The steps of the qualitative analysis are shown in Table 1.
The researchers used content analysis, as it was suitable for deduction in this second explanatory phase. Content analysis involved coding. Deductive codes, which form a provisional starting list of codes prior to coding (Miles, Huberman, & Saldaña, 2014, p. 81), were derived from the research questions and the quantitative findings. To increase analysis reliability, two other coders were invited to make inferences, which were compared and measured with Krippendorff's (2004a, pp. 221-236; 2004b) alpha. To compute Krippendorff's alpha in SPSS, the researchers used the macro available at http://afhayes.com/spss-sas-and-mplus-macros-and-code.html (Hayes & Krippendorff, 2007). Compared with Cohen's (1960) kappa, which measures agreement between only two coders on nominal variables, and Fleiss' (1971) method for more than two coders, Krippendorff's alpha corrects for missing data and provides a versatile inter-coder reliability measure on nominal, ordinal and interval variables between two or more coders (Bernard & Ryan, 2010, p. 304).
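For illustration, the following is a minimal sketch of the same inter-coder reliability computation in Python, assuming the third-party krippendorff package (pip install krippendorff) rather than the SPSS macro used in the study; the coders' codes are hypothetical.

```python
# Sketch of an inter-coder reliability check with Krippendorff's alpha.
# Rows are coders, columns are coded units; np.nan marks missing codings,
# which Krippendorff's alpha (unlike Cohen's kappa) handles directly.
import numpy as np
import krippendorff  # third-party package, assumed installed

reliability_data = np.array([
    [1, 2, 3, 3, 2, 1, 4, 1, 2, np.nan],   # coder 1 (hypothetical nominal codes)
    [1, 2, 3, 3, 2, 2, 4, 1, 2, 5],        # coder 2
    [np.nan, 3, 3, 3, 2, 3, 4, 2, 2, 5],   # coder 3
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")
```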
RELIABILITY AND VALIDITY
The proposed research involved using quantitative and qualitative approaches sequentially. Accordingly, the reliability, which refers to "a matter of whether a particular technique, applied repeatedly to the same object, yields the same result each time" (Babbie, 2014, p. 152), and the validity, which refers to "the correctness and truthfulness of an inference that is made from the results of a research study" (Christensen, Johnson, & Turner, 2014, p. 159), of each approach at each phase had to be addressed. In addition, the reliability and validity of mixing quantitative and qualitative techniques sequentially had to be contemplated.
Internal validity, which concerns how researchers infer that a relationship between two variables is causal (Cook & Campbell, 1979, p. 37), could not be established within either phase alone. Quantitatively, in Phase 1, the analytical results from correlation and multiple regression cannot guarantee a cause-effect relationship. Qualitatively, in the second phase, the results from the content analysis of the interview transcripts could confirm the cause-effect relationship, but it was questionable whether these analytical results from the small sample (three 8-participant groups) could be generalized to the population of the college's students. This limitation was overcome by the quantitative results from the large sample (three 75-participant groups). The researchers used methodological triangulation to support internal validity by comparing the qualitative findings obtained from the interview transcripts with the quantitative findings obtained in the first phase. The findings from Phase 1 provided criteria for follow-up confirmation of the cause-effect relationship in the second qualitative phase. Triangulation can be achieved by comparing the findings from the first quantitative and the second qualitative phases and by looking for convergence and complementarity between them (Kelle & Erzberger, 2004).
The researchers also considered external validity, which refers to the validity of inferences about whether the cause-effect relationship found in a study can be generalized to and across populations (Shadish, Cook, & Campbell, 2002, p. 38). Generalization rests on the assumption that there are regularities in the population, and, as it is not feasible to investigate the whole population, sampling is needed. In the first quantitative phase, inferential statistics such as significance values indicate that the characteristics of the accessible population can be inferred from the characteristics of the sample. In Phase 2, the analytical results were based on a smaller subset of the sample from Phase 1. External validity could be achieved by rough generalization, specifically naturalistic generalization, which is the process of generalizing on the basis of similarity. That is, the findings from the second phase, together with the findings from the first phase, can be generalized to the students taking the introductory IT course.
Reliability and Validity of the Quantitative Phase 1
For reliability in Phase 1, the researchers measured internal consistency reliability with Cronbach's (1951) coefficient alpha. Each questionnaire question contains five similar items, each measured on a 7-point Likert (1932) scale ranging from "strongly agree" (7) to "strongly disagree" (1), with an additional option of "not available" (0). To mitigate response bias, the last two items are negatively worded, and their scoring is reversed, ranging from "strongly agree" (1) to "strongly disagree" (7). Internal consistency is reflected by similar scores on the similar items of a question and can be measured with Cronbach's coefficient alpha, which should ideally be above 0.7 (DeVellis, 2012; Nunnally, 1994). The researchers ensured that the participants understood the questions and gave reliable answers by offering explanations, if needed, while the participants completed the questionnaire in the meetings.
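For illustration, the following is a minimal sketch in Python of the reverse-scoring step and the standard formula for Cronbach's coefficient alpha; the item responses are hypothetical, and responses marked "not available" (0) are assumed to have been excluded beforehand.

```python
# Sketch of the internal-consistency check: reverse-score the two negatively
# worded items on the 1-7 scale, then compute Cronbach's (1951) alpha.
import numpy as np

items = np.array([
    [6, 7, 6, 2, 1],
    [5, 5, 6, 3, 2],
    [7, 6, 7, 1, 1],
    [4, 5, 4, 4, 3],
], dtype=float)                 # rows = respondents, columns = 5 items
items[:, -2:] = 8 - items[:, -2:]  # reverse-score the last two items (1-7 scale)

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}  (>= 0.7 is conventionally acceptable)")
```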
In Phase 1, two types of validity were involved, as indicated in Wong (2015): statistical conclusion validity and construct validity. Statistical conclusion validity refers to the validity with which a researcher can infer how two variables are related (Christensen, Johnson, & Turner, 2014, p. 160). The analytical results in this quantitative phase showed the relationship between students' learning and the variables. Making inferences about the relationship involves inferential statistics, such as hypothesis testing. In this study, the null hypothesis was that there is no relationship between any or all of the independent variables (i.e., students' English proficiency, online instructor guidance and online collaboration) and the dependent variable, students' learning, as reflected by the students' test scores. The significance value, or probability value (p-value), generated by SPSS indicated whether the analytical results are statistically significant.
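As an illustration of such a significance test, the following minimal sketch in Python computes the p-value of a Pearson correlation between a predictor and test scores on hypothetical data.

```python
# Sketch of a statistical-conclusion-validity check: test the null hypothesis
# of no relationship between a predictor and test scores via the p-value of a
# Pearson correlation. Data are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
english = rng.normal(60, 10, 75)
scores = 0.6 * english + rng.normal(0, 8, 75)

r, p_value = stats.pearsonr(english, scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the correlation is statistically significant at alpha = 0.05.")
```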
Construct validity depends highly on how a construct is operationalized. In this study, students' English proficiency was operationalized by their English proficiency marks. Like Gerber, Grund, and Grote (2007), the researchers counted the instructors' content-related and language-related messages posted in the discussion forum to operationalize online instructor guidance, and counted the peer students' content-related and language-related messages posted in the discussion forum to operationalize online collaboration. The dependent variable, students' learning, was operationalized by test scores. Convergent validity is a way to assess construct validity and is obtained by examining the degree to which an operationalization converges on similar operationalizations. This validity can be evaluated by statistical procedures (Creswell, 2012). For example, the learning effectiveness of the teaching methods obtained from the questionnaire could be statistically compared with the participants' rankings of the teaching methods from the same questionnaire to determine whether convergence occurred.
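For illustration, the following minimal sketch in Python compares two hypothetical operationalizations, questionnaire effectiveness ratings and participants' rankings of the teaching methods, using a Spearman rank correlation.

```python
# Sketch of a convergent-validity check: statistically compare two
# operationalizations of the same construct. All values are hypothetical.
from scipy import stats

effectiveness_ratings = [5.8, 4.9, 6.2, 5.1, 4.4, 6.0]  # hypothetical mean ratings
method_rankings = [2, 4, 1, 3, 6, 2]                     # hypothetical ranks (1 = best)

rho, p_value = stats.spearmanr(effectiveness_ratings, method_rankings)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strong negative rho (high rating paired with a low, i.e., better, rank)
# would suggest the two operationalizations converge.
```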
Reliability and Validity of the Qualitative Phase 2
The reliability of Phase 2 was achieved by involving two or more coders in analyzing and interpreting the qualitative data in the interview transcripts. The researchers investigated the consistency of the different coders' analyses and interpretations by measuring inter-coder agreement on the coding of the interview transcripts using Krippendorff's alpha.
In this qualitative phase, descriptive validity and interpretive validity were considered. Descriptive validity refers to the accuracy of the account reported by a researcher (Christensen, Johnson, & Turner, 2014, p. 346). The researchers achieved descriptive validity through investigator triangulation: two other coders were invited to join the qualitative analysis, all coders cross-checked each other's descriptions of the interview transcripts, and they discussed the information to reach agreed-upon valid descriptions of the analytical results.
According to Johnson and Christensen (2012), interpretive validity refers to "portraying accurately the meanings attached by participants to what is being studied by the researcher" (p. 265). This validity was achieved by member checking (or participant feedback), which is the respondents' checking of the interpretations from the interview transcripts (Pidgeon, 1996, p. 84).
Reliability and Validity of the Mixed Methods
Reliability deals with the consistency of the findings of a research study and its replication. For the reliability of the proposed mixed methods, methodological triangulation could be achieved by comparing the quantitative and qualitative findings.
The proposed research does not use concurrent mixing. Therefore, the types of mixed research validity pertaining to concurrent mixing approaches, such as the inside-outside validity and commensurability mixing validity identified by Onwuegbuzie and Johnson (2006), are not applicable to the proposed research. Inside-outside validity refers to the extent to which a researcher accurately understands, uses and presents the participants' subjective insider views (or emic views) and the researcher's objective outsider views (or etic views). This involves the complicated process of moving back and forth between emic and etic views, especially in concurrent mixing of quantitative and qualitative methods. Commensurability mixing validity refers to the extent to which a researcher accurately integrates and presents the results of concurrently mixed quantitative and qualitative approaches. Especially in concurrent mixing, divergent and even contradictory results or views may be found when integrating the results, and this validity concerns how validly the researcher deals with such divergence. As the design of the proposed research is a two-phase sequential design, in which the findings in Phase 1 were first identified and then followed up with explanations in Phase 2, contradictory results were not identified in this research.
In addition to the multiple validities presented previously, the researchers examined four of the mixed research validity types proposed by Onwuegbuzie and Johnson (2006): paradigmatic validity, weakness minimization validity, sequential validity and sample integration validity. Paradigmatic validity refers to the validity of adopting a research paradigm that enables research methodologies such as mixed methods to be conducted. The research paradigm commonly used in quantitative research is positivism, in which knowledge of reality comes from scientific or empirical quantitative methods. Interpretivism is one of the dominant research paradigms used in qualitative research; epistemologically, interpretivists capture and construct meaning through qualitative methods. For this research design, which mainly adopts sequential mixing of different approaches, Plano Clark and Creswell (2010) noted that such a design emphasizes the first quantitative phase and follows up with an explanation in the second qualitative phase (p. 305), and that the research paradigm used is post-positivism (p. 66). Positivists analyze data to develop theory, while post-positivists use a falsification approach to analyze data in order to test theory (Willis, 2007). Critical realism is an appropriate form of post-positivism that takes a falsification approach. The researchers began with knowledge obtained from the quantitative phase, then were critical of the known reality and gathered more views to reach a better understanding in the qualitative phase. Epistemologically, the researchers were post-positivist critical realists who recognize that "all observation is fallible and has error and that all theory is revisable" (Trochim, 2006). Therefore, multiple measures are needed to triangulate across multiple fallible perspectives (Trochim, 2006).
Weakness minimization validity concerns the validity of compensating for the weaknesses of one research approach with the strengths of the other (Christensen, Johnson, & Turner, 2014, p. 364). This validity was achieved by combining the quantitative and qualitative methods so that each counters the other's weaknesses (Brewer & Hunter, 1989; Johnson & Turner, 2003; Gray, 2014, p. 37). In this study, the weaknesses of the quantitative phase were the difficulty of obtaining an in-depth understanding from the participants and the problem of establishing that two variables are causally related. These weaknesses could be countered by the qualitative interviews used to explore perceived views and the causal relationship. Conversely, the qualitative interview approach is not an effective or efficient way to identify the characteristics of large samples.
As the sequential design of mixed methods was mainly adopted in this study, the researchers had to consider sequential validity, which refers to the extent to which a researcher can ensure that the order of the quantitative and qualitative approaches in a sequential design does not bias the results (Christensen, Johnson, & Turner, 2014, p. 364). The results of the correlation and multiple regression analyses in the quantitative phase provided criteria for further investigation in the qualitative phase: a positive correlation between two variables and a combined effect of the predictor variables on the outcome variable in the multiple regression in Phase 1 were the criteria for following up on the cause-effect relationship in Phase 2. Conversely, if one variable causes another, the two are necessarily correlated; likewise, if several variables jointly cause an outcome variable, their effects appear in the multiple regression analysis. Therefore, even if the qualitative phase had been performed before the quantitative phase, different findings would not be expected in this study. In this regard, sequential validity can be achieved because the results of this study are not due to the ordering of the quantitative and qualitative approaches.
Sample integration validity concerns the use of different participants in different samples; it refers to the extent to which a researcher makes appropriate generalizations from mixed samples. If a study using mixed methods uses different participants in different samples, the researcher has to consider whether these samples share the same beliefs and experience. In this research study, the sample in the second qualitative phase is a subset of the sample in the first quantitative phase. This smaller sample in the second qualitative phase should have the same amount of experience using online education as the sample in the first quantitative phase, and its views should help to obtain a better understanding of the findings in the first quantitative phase.
RESEARCH FINDINGS AND IMPLICATIONS
So far, the quantitative phase of the mixed methods has been completed. The analytical results of this phase, as stated in Wong (2015), revealed that each of the three predictor variables (i.e., students' English proficiency, online instructor guidance and online collaboration) and students' test scores are positively correlated and there is a combined effect of these variables on the students' test scores. Among these variables, students' English proficiency has the largest effect, while online instructor guidance and online collaboration have a moderate effect on the students' learning in cyber education. These results indicated that students' English proficiency, online instructor guidance and online collaboration are potential factors affecting students' learning effectiveness.
However, the cause-effect relationship between the variables and the students' learning was not confirmed. This led to the qualitative approach, in which qualitative interviews could be used as a follow-up to confirm the cause-effect relationship and obtain the participants' views on how to develop effective learning in cyber education.
This qualitative approach is significant, as it focuses on the students' perspectives of the factors affecting their learning in cyber education, so that the researchers can better understand what quality cyber education should be. Cooper (1993) regards students' perspectives as important, as "it [learning from pupils' perspectives] can help us to understand the effects and evaluate the effectiveness of provision and intervention" (p. 129). This qualitative research also explored what affects or contributes to students' online learning. These findings help to enhance learning effectiveness in online education and must be addressed for a college to develop effective learning in its cyber education programs.
DISCUSSION AND CONCLUDING REMARKS
Hong Kong higher education has distinctive characteristics: college students learn through English, a language with which they are less familiar; they are more willing to collaborate in online discussion forums; and they need help and guidance in those forums. In view of these characteristics, the researchers identified students' English proficiency, online instructor guidance and online collaboration as variables that might influence students' online learning. To design a research study exploring how these variables influence students' online learning, the researchers evaluated different research approaches and arrived at a design that mainly adopts Creswell and Plano Clark's (2011) follow-up explanations variant of the explanatory sequential mixed methods design, complemented by triangulation. In this design, the survey in the first quantitative phase explored the correlation and multiple regression relationships between the variables and the students' learning. Then, the interviews in the second qualitative phase followed up to explore any causal relationship between the variables (i.e., students' English proficiency, online instructor guidance and online collaboration) and the students' learning. Inter-coder reliability testing and member checking were performed to ensure the validity and reliability of the study. The proposed mixed methods research design could bring the advantages of both quantitative and qualitative approaches and allow each to compensate for the disadvantages of the other.
So far, the quantitative phase has been carried out by Wong (2015), and its findings on the relationship between the three variables (i.e., students' English proficiency, online instructor guidance and online collaboration) and students' learning provide implications for the next qualitative phase, which will explore the cause-effect relationship, the participants' views on their experiences and expected improvements in using cyber education. The next research focus is the qualitative phase (Phase 2).
REFERENCES
Aberson, C.L., Berger, D.E., Healy, M.R., Kyle, D.J., & Romero, V.L. (2000). Evaluation of an interactive tutorial for teaching the central limit theorem. Teaching of Psychology, 27(4), 289-292. http://dx.doi.org/10.1207/S15328023TOP2704_08.
Babbie, E. (2014). The Basics of Social Research. Belmont: Wadsworth/Cengage Learning.
Bassey, M. (1999). Case Study Research in Educational Settings. Buckingham: Open University Press.
Bernard, H.R., & Ryan, G.W. (2010). Analyzing Qualitative Data: Systematic Approaches. Thousand Oaks: SAGE.
Borgatti, S.P., Everett, M.G., & Freeman, L.C. (2004). UCINET 6.69. Harvard: Analytic Technologies.
Brewer, J., & Hunter, A. (1989). Multimethod Research: A Synthesis of Styles. Newbury Park: SAGE.
Bush, T. (2002). Authenticity - reliability, validity and triangulation. In M. Coleman and A.R.J. Briggs (Eds.), Research Methods in Educational Leadership and Management (pp. 58-72). London: Paul Chapman. http://dx.doi.org/10.4135/9781473957695.n6.
Cherryholmes, C.H. (1992). Notes on pragmatism and scientific realism. Educational Researcher, 21(6), 13-17. http://dx.doi.org/10.3102/0013189X021006013.
Chin, K.L., Bauer, C., & Chang, V. (2000, June). The use of web-based learning in culturally diverse learning environments. Paper presented at the 6th Australian World Wide Web Conference, Cairns, Australia.
Christensen, L.B., Johnson, R.B., & Turner, L. (2014). Research Methods, Design and Analysis (12th ed.). Upper Saddle River: Pearson.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46. http://dx.doi.org/10.1177/001316446002000104.
Cohen, L., Manion, L., & Morrison, K. (2011). Research Methods in Education (7th ed.). Oxon: Routledge. http://dx.doi.org/10.4324/9780203224342.
Cook, T.D., & Campbell, D.T. (1979). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally.
Cooper, P. (1993). Learning from pupils' perspectives. British Journal of Special Education, 20(4), 129-132.
Creswell, J.W. (2012). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research (4th ed.). Boston: Pearson.
Creswell, J.W. (2014). Research Design: Quantitative, Qualitative, and Mixed Methods Approaches (4th ed.). Thousand Oaks: SAGE.
Creswell, J.W., & Plano Clark, V.L. (2011). Designing and Conducting Mixed Methods Research. Thousand Oaks: SAGE.
Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. http://dx.doi.org/10.1007/BF02310555.
Denzin, N.K. (1989). The Research Act: A Theoretical Introduction to Sociological Methods (3rd ed.). Englewood Cliffs: Prentice Hall.
DeVellis, R.F. (2012). Scale Development: Theory and Applications (3rd ed.). Thousand Oaks: SAGE.
Dewey, J. (1933). How We Think. London: D. C. Health and Co. http://dx.doi.org/10.1037/10903-000.
Fleiss, J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382. http://dx.doi.org/10.1037/h0031619.
Gall, M.D., Gall, J.P., & Borg, W.R. (2007). Educational Research: An Introduction (8th ed.). Boston: Allyn and Bacon/Pearson.
Gerber, M., Grund, S., & Grote, G. (2007). Distributed collaboration activities in a blended learning scenario and the effects on learning performance. Journal of Computer Assisted Learning, 24(3), 232-244. http://dx.doi.org/10.1111/j.1365-2729.2007.00256.x.
Gray, D.E. (2014). Doing Research in the Real World (3rd ed.). London: SAGE.
Greene, J.C. (2007). Mixed Methods in Social Inquiry. San Francisco: Jossey-Bass.
Greene, J.C., & Caracelli, V.J. (1997). Defining and describing the paradigm issue in mixed method evaluation. New Directions for Evaluation, 74(4), 5-17. http://dx.doi.org/10.1002/ev.1068.
Greene, J.C., & Caracelli, V.J. (2003). Making paradigmatic sense of mixed methods practice. In A. Tashakkori and C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp. 91-110). Thousand Oaks: SAGE.
Greene, J.C., & Hall, J.N. (2010). Dialectics and pragmatism: being of consequence. In A. Tashakkori and C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (2nd ed.) (pp.119-144). Thousand Oaks: SAGE. http://dx.doi.org/10.4135/9781506335193.n5.
Hanson, W.E., Creswell, J.W., Plano Clark, V.L., Petska, K.S., & Creswell, J.D. (2005). Mixed methods research designs in counseling psychology. Journal of Counseling Psychology, 52(2), 224-235. http://dx.doi.org/10.1037/0022-0167.52.2.224.
Hayes, A.F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77-89. http://dx.doi.org/10.1080/19312450709336664.
Johnson, R.B., & Christensen, L.B. (2012). Educational Research: Quantitative, Qualitative, and Mixed Approaches (4th ed.). Thousand Oaks: SAGE.
Johnson, R.B., & Turner, L.A. (2003). Data collection strategies in mixed methods research. In A. Tashakkori and C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp.297-319). Thousand Oaks: SAGE.
Johnson, S., Aragon, S., Shaik, N., & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11(1), 29-49.
Kelle, U., & Erzberger, C. (2004). Qualitative and quantitative methods: Not in opposition. In U. Flick, E. von Kardorff, and I. Steinke (Eds.), A Companion to Qualitative Research. London: SAGE.
Krippendorff, K. (2004a). Content Analysis: An Introduction to Its Methodology (2nd ed.). Thousand Oaks: SAGE.
Krippendorff, K. (2004b). Reliability in content analysis: some common misconceptions and recommendations. Human Communication Research, 30(3), 411-433. http://dx.doi.org/10.1093/hcr/30.3.411.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 5-53.
Lim, J., Kim, M., Chen, S.S., & Ryder, C.E. (2008). An empirical investigation of student achievement and satisfaction in different learning environments. Journal of Instructional Psychology, 35(2), 113-119.
Lincoln, Y.S., & Guba, E.G. (2000). Paradigmatic controversies, contradictions, and emerging confluences. In N.K. Denzin and Y.S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.) (pp.163-188). Thousand Oaks: SAGE.
Meadow-Orlans, K., Mertens, D.M., & Sass-Lehrer, M. (2003). Parents and Their Deaf Children: The Early Years. Washington, D.C.: Gallaudet Press.
Mertens, D.M. (2007). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1(3), 212-225. http://dx.doi.org/10.1177/1558689807302811.
Mertens, D.M. (2009). Transformative Research and Evaluation. New York: Guilford.
Mertens, D.M. (2015). Research and Evaluation in Education and Psychology (4th ed.). Thousand Oaks: SAGE.
Miles, M.B., Huberman, A.M., & Saldaña, J. (2014). Qualitative Data Analysis: A Methods Sourcebook (3rd ed.). Thousand Oaks: SAGE.
Murphy, J.P. (1990). Pragmatism: From Peirce to Davidson. Boulder: Westview Press.
Nunnally, J.C. (1994). Psychometric Theory (3rd ed.). New York: McGraw-Hill.
Onwuegbuzie, A.J., & Johnson, R.B. (2006). The validity issue in mixed research. Research in the Schools, 13(1), 48-63.
Pidgeon, N. (1996). Grounded theory: Theoretical background. In J. Richardson (Ed.), Handbook of Qualitative Research Methods (pp.75-85). Leicester: BPS Books. http://dx.doi.org/10.4135/9781848608184.n28.
Plano Clark, V.L., & Creswell, J.W. (2010). Understanding Research: A Consumer's Guide. Upper Saddle River: Pearson.
Razzhavaikina, T.I. (2007). Mandatory counseling: A mixed methods study of factors that contribute to the development of the working alliance. Unpublished doctoral dissertation, University of Nebraska-Lincoln.
Sandelowski, M. (2001). Real qualitative researchers don't count: the use of numbers in qualitative research. Research in Nursing and Health, 24(3), 230-240.
Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Sieber, S. (1973). The integration of fieldwork and survey methods. American Journal of Sociology, 78(6), 1335-1359. http://dx.doi.org/10.1086/225467.
Strauss, A. (1987). Qualitative Analysis for Social Scientists. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511557842.
Tabachnick, B.G., & Fidell, L.S. (2013). Using Multivariate Statistics (6th ed.). Upper Saddle River: Pearson.
Tashakkori, A., & Teddlie, C. (2003). Handbook of Mixed Methods in Social and Behavioral Research. Thousand Oaks: SAGE.
Trochim, W. (2006). Positivism & post-positivism. Research Methods Knowledge Base. Retrieved January 6, 2016, from http://www.socialresearchmethods.net/kb/positvsm.php
Webb, E.J., Campbell, D.T., Schwartz, R.D., Sechrest, L., & Grove, J.B. (1981). Nonreactive Measures in the Social Sciences (2nd ed.). Boston: Houghton Mifflin.
Weiss, N.A. (2012). Introductory Statistics (9th ed.). Boston: Addison-Wesley/Pearson.
Willis, J.W. (2007). Foundations of Qualitative Research: Interpretive and Critical Approaches. Thousand Oaks: SAGE. http://dx.doi.org/10.4135/9781452230108.
Wong, S. (2008, November). An evaluation of pre-university students' performance studying on-line education in an introductory information technology course. Paper Presented at the International Conference of Education, Research and Innovation, Madrid, Spain.
Wong, S. (2012). Factors Influencing On-Line Learning: A Study Using Mixed Methods in a Hong Kong Higher Education Institution. Saarbrücken, Germany: LAMBERT Academic Publishing.
Wong, S. (2015). Exploring the relation of students' language proficiency, online instructor guidance and online collaboration with their learning in Hong Kong bilingual cyber education. International Journal of Cyber Society and Education, 8(2), 115-132. http://dx.doi.org/10.7903/ijcse.1407.
Simon Wong
The Hong Kong Polytechnic University (Hung Hom Bay)
8 Hung Lok Road, Hung Hom, Kowloon, Hong Kong
Paul Cooper
Brunel University London
Halsbury Building 103, Kingston Lane, Uxbridge, Middlesex UB8 3PH, United Kingdom