Summary

The use of educational technology to support face-to-face education continues to be an important priority for governments, politicians and policymakers, especially in national efforts to steer towards ‘new’ and ‘improved’ education and learning (Ministerie van onderwijs, cultuur en wetenschappen, 2015; Obama, 2011). The potential of this so-called blended learning is well described in the literature. However, current uses of blended learning rarely take the creative, connected and collective forms of learning that the literature often describes. Within universities, blended learning is mostly deployed for logistical reasons: providing students with additional resources that can be accessed at their own pace to preface or complement face-to-face learning (Henderson, Finger, & Selwyn, 2016). This dissertation focuses on this current approach, in which blended learning serves to preface or complement face-to-face learning.

In the design and use of blended learning, both lecturers and students struggle to find ways of using technologies that genuinely enhance teaching and learning. Universities struggle to define the specific affordances of these technologies that benefit teaching, and students are not naturally aware of how to use these technologies to support their learning.

One of the advantages of blended learning is that learning activities take place in an online environment, which readily generates data about these online activities. The methods and tools that aim to collect, analyze and report learner-related educational data, for the purpose of informing evidence-based educational design decisions, are referred to as learning analytics. The application of learning analytics, combined with existing educational research on how internal and external conditions influence learning, could identify causes of the struggle that students and universities face in making ‘good’ use of educational technology, and thereby improve the utilization and application of blended learning in the future. The aim of this dissertation is to determine which factors either facilitate or hinder effective learning for students while they interact in a hybrid or blended learning environment. These factors could be related to the instructional conditions of the course (external conditions) or based on student characteristics (internal conditions). The research reported in this dissertation aims to identify these factors by addressing the central research question: which student characteristics and external conditions of blended learning could explain the degree of adoption of blended learning and its effect on course performance?

To answer this research question, data were collected from blended learning courses at two faculties. During a course on Contract Law at the Faculty of Law, data were collected about lecture attendance, use of recorded lectures, use of short essay questions, number of formative assessments completed, total time spent in the Learning Management System (LMS), and PowerPoint downloads. Moreover, at the start of the course the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich, Smith, Garcia, & McKeachie, 1993) was administered. During a course on Biological Psychology at the Faculty of Social and Behavioural Sciences, data were collected about lecture attendance, use of recorded lectures, scores on formative assessments, and hits and total time spent in the LMS. Moreover, at the start of the course students filled out the Inventory of Learning Styles questionnaire (Vermunt, 1992), which was used only to assess the regulation of the learning process. The results are described in Chapters 2 through 6. Chapters 2 and 3 focus on the external conditions, to identify key risk factors or key success factors of blended learning at the software application level. Chapters 4, 5 and 6 focus on internal conditions that facilitate or hinder effective learning for students while they interact in a hybrid or blended learning environment.
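To make the kind of data set described above concrete, the sketch below shows one way such per-student trace data and questionnaire scores could be combined into a single analysis table. It is a minimal illustration in Python, not the dissertation’s actual pipeline; the file names and column names are hypothetical.

```python
# Minimal sketch: combining per-student LMS trace data with questionnaire
# scores into one analysis table. File and column names are hypothetical.
import pandas as pd

# Hypothetical exports: one row per LMS event, and one row per student
# with questionnaire (e.g. MSLQ) scale scores.
events = pd.read_csv("lms_events.csv")   # student_id, tool, duration_sec
mslq = pd.read_csv("mslq_scores.csv")    # student_id, self_regulation, ...

# Aggregate the trace data: total time and number of hits per tool, per student.
usage = (events
         .groupby(["student_id", "tool"])["duration_sec"]
         .agg(total_time="sum", hits="count")
         .unstack("tool", fill_value=0))
usage.columns = [f"{tool}_{stat}" for stat, tool in usage.columns]

# Join with the questionnaire data for the later regression and cluster analyses.
analysis_table = usage.join(mslq.set_index("student_id"), how="left")
print(analysis_table.head())
```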

Chapter 2 discusses a systematic review that aimed to identify the different instructional conditions under which recorded lectures were used by students, and explored whether differences in distinctive elements of the course, expressed in disciplinary, contextual and course-specific elements, had an impact on the use of recorded lectures and subsequently influenced course performance. Only fourteen experimental studies met the predetermined eligibility criteria. The results indicated that no influence of disciplinary, contextual or course-specific factors could be determined. Four studies reported a positive relation between the deployment of recorded lectures and course performance. These four studies all used a non-random sample of the cohort of students, due to informed consent issues. The possibility exists that students with previous (positive) experiences with recorded lectures were more likely to consent to the research, thus influencing the results. The majority of the studies reported a decline in lecture attendance, although studies from the field of medicine reported this drop in attendance less frequently. The drop in attendance at the face-to-face lectures was not reflected in lower course performance. However, the studies in the review lack a focus on the actual use of the recorded lectures, and most studies in the systematic review confound the deployment of recorded lectures with their actual use.

Chapter 3 focuses on the actual use of recorded lectures instead of merely on the availability of these recordings within the course. To this end, attendance at the face-to-face lectures was monitored at the individual level, combined with the actual use of the corresponding recorded lectures. The course in question consisted of two assessments. The first assessment covered the first four weeks and focused on assessing the knowledge domain. The second assessment covered the following four weeks and focused on assessing higher-order thinking skills. Students were assigned to four different groups based on their use of the recorded lectures and/or lecture attendance: non-users, viewers, visitors and supplementers. A linear regression analysis explored how time spent on lectures (either online or face-to-face) contributed to course performance on either assessment. The results show a significant effect of the form of instruction on course performance for the assessment of the knowledge base, but not for that of higher-order thinking skills. Attending only the face-to-face lectures explains 21% of the variance for the first assessment, but this declines to 7% for the second assessment. The predictive value of only watching the recorded lectures is more stable, but modest: 4% for the first assessment and 3% for the second. Recorded lectures thus seem to have added value for learning objectives that concern developing a knowledge base, but only if these recordings are used sparingly and with restraint. For learning objectives that concern higher-order thinking skills, recorded lectures seem to offer less value, though neither do face-to-face lectures.
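As an illustration of the type of regression analysis referred to here, the sketch below fits an ordinary least squares model per assessment, with time spent on face-to-face and recorded lectures as predictors. The variable names are hypothetical and the model is a simplification of the analysis in Chapter 3.

```python
# Sketch of a per-assessment linear regression of course performance on time
# spent on face-to-face and recorded lectures. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("lecture_use.csv")  # one row per student

for exam in ["exam1_grade", "exam2_grade"]:
    fit = smf.ols(f"{exam} ~ face_to_face_min + recorded_min", data=students).fit()
    # R-squared indicates how much of the variance in performance on this
    # assessment the lecture-time variables explain.
    print(exam, f"R^2 = {fit.rsquared:.2f}")
    print(fit.params)
```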

Chapter 4 focuses on individual differences in the use of learning resources by conducting a cluster analysis based on the use of these learning resources, including lecture attendance, throughout the course. These clusters were then aligned with data about regulation of the learning process, to determine if and how differences in the use of learning resources are reflected in differences in regulation of the learning process. A four-cluster solution emerged, which is in line with previous research (Lust et al., 2013a; Lust et al., 2013b; Kovanović et al., 2015b). First, there is a cluster in which recorded lectures are used as a substitute for lecture attendance, combined with a slightly above-average use of the other digital learning resources. This cluster is referred to as content-focused intensive users (Kovanović et al., 2015b). Second, there is a cluster with a clear preference for attending face-to-face lectures; its members also show a slightly above-average use of the other digital learning resources, apart from the recorded lectures. This cluster is referred to as socially-focused intensive users (Kovanović et al., 2015b), since these students use the learning resources with a special focus on the social aspect: lecture attendance. Third, there is a cluster whose members use neither the face-to-face nor the recorded lectures, but who do show an average use of the other learning resources. This cluster is labelled task-focused selective users (Lust et al., 2013b; Kovanović et al., 2015b). Finally, there is a cluster in which students hardly use any of the learning resources. In line with previous research, this cluster is labelled no-users (Lust et al., 2013b; Kovanović et al., 2015b). The differences in the use of learning resources are reflected in course performance: content-focused intensive users and socially-focused intensive users scored significantly higher than the other two clusters.
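The sketch below illustrates a usage-based cluster analysis of this kind. It standardizes per-student usage features and applies k-means with four clusters; the feature names are hypothetical, and k-means merely stands in for the clustering procedure used in the chapter.

```python
# Sketch of a cluster analysis on standardized resource-use features.
# Feature names are hypothetical; k-means is used as an example procedure.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

usage = pd.read_csv("resource_use.csv", index_col="student_id")
features = ["lecture_attendance", "recorded_lecture_min",
            "formative_tests_completed", "lms_time_min"]

X = StandardScaler().fit_transform(usage[features])
usage["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Cluster profiles help interpret the groups (e.g. content-focused intensive
# users versus no-users) by comparing mean use of each resource per cluster.
print(usage.groupby("cluster")[features].mean())
```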

Chapter 5 examined whether regulation strategies determine the use of different learning resources. It describes a cluster analysis based on differences in regulating the learning process. These clusters were then aligned with the use of the different learning resources, to determine if and how differences in regulation of learning are reflected in differences in the use of the different learning resources. The cluster analysis showed three distinct patterns in the way students regulate their learning. One third of the students are mainly able to self-regulate their own learning; one third use an external source to regulate their learning; and one third have no clear pattern when regulating their learning process: they switch between self-regulation and external regulation, and sometimes show a lack of regulation during the learning process.

When data about the use of the learning resources are added to the three clusters, the clusters show no significant differences in the use of the learning resources; all three use the learning resources to the same extent with regard to frequency and duration. However, there are differences in how this use affects course performance. For students with an external regulation strategy, the use of the different learning resources explains 23% of the variability in course performance, while for self-regulated students this is 50%.
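A simple way to express this difference in explained variance is to fit the same regression of course performance on resource use within each regulation cluster and compare the R-squared values, as in the sketch below (hypothetical column names, not the dissertation’s actual analysis script).

```python
# Sketch: compare how much variance in course performance resource use explains
# within each regulation cluster. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("regulation_and_use.csv")  # one row per student

for label, group in df.groupby("regulation_cluster"):
    fit = smf.ols("final_grade ~ lecture_attendance + recorded_lecture_min + "
                  "formative_tests_completed + lms_time_min", data=group).fit()
    print(label, f"R^2 = {fit.rsquared:.2f}")
```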

Chapter 6 describes the development of a general prediction model in order to achieve a more general view of the intertwined relationships between the different external and internal variables in this study: the use of the learning resources, regulation of the learning process, and course performance. It aims to provide insight into the intertwined relationships between the use of the LMS and its tools, regulation of the learning process, and their direct or indirect effects on course performance. By establishing two path models, this research aims to address current issues of scalability and transferability of learning analytics research results across domains and populations. The findings show that the relationship between the use of learning resources and course performance is not as straightforward as currently assumed in most learning analytics research. Moreover, they show that regulation strategies indirectly influence the use of learning resources. The final path analysis based on the law course shows a limited number of variables with a direct impact on course performance; the final path analysis based on the psychology course shows more variables with a direct impact on course performance. The strongest association is between the score on the formative assessments and the final grade for the course. This finding is consistent with the path model for the law course.
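To indicate how such a path model can be specified and estimated, the sketch below uses the third-party semopy package (an assumption, since the dissertation does not name its software) with hypothetical variable names: regulation is modeled as influencing resource use, which in turn, together with formative assessment scores, predicts the final grade.

```python
# Sketch of a path model in lavaan-style syntax, estimated with semopy.
# Variable names and the exact structure are hypothetical illustrations.
import pandas as pd
import semopy

df = pd.read_csv("course_data.csv")  # one row per student

path_spec = """
recorded_lecture_min ~ self_regulation
formative_score ~ self_regulation + recorded_lecture_min
final_grade ~ formative_score + recorded_lecture_min + lecture_attendance
"""

model = semopy.Model(path_spec)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```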

Conclusion

The aim of this dissertation was to determine which factors either facilitate or hinder effective learning for students as they interact in a hybrid or blended learning environment. These factors could be related to the instructional conditions of the course (external conditions), or they could be based on student characteristics (internal conditions). When considering the external conditions at the software application level, by comparing previous course offerings through a systematic review of the literature with a specific focus on recorded lectures, the results are inconclusive. These inconclusive results are caused by the inability to properly compare the different studies, owing to differences in the sampling of the population, data collection and analysis. Moreover, the studies lack a description of contextual differences and influences, including covariates. Therefore, no general capabilities at the software application level could be determined for recorded lectures when considering disciplinary, contextual and course-specific elements. Despite these challenges, it is possible to identify key risk variables and success variables given their instructional aims. For recorded lectures, these variables consist of the negative relationship between lecture attendance and the use of recorded lectures, and the large individual differences in the use of recorded lectures.

The current research confirms the previously reported differences in the use of learning resources, with subsequent differences in course performance. Moreover, even when students use the learning resources to the same extent, they differ in their approach to learning, which eventually influences course performance, expressed in differences in the variance in course performance explained by the use of the learning resources. The ability to self-regulate learning is reflected in the use of the learning resources, yet it is not a determinant of that specific use. However, self-regulated learners benefit more from the use of the learning resources, as shown by the greater explained variance of that use on course performance compared to students who are less capable of self-regulating their learning. Contrary to expectations, no significant differences in course performance could be established. This lack of significant differences is caused by the current employment of educational technology in blended learning environments. Current applications focus on enhancing teaching, not learning, and provide students with additional resources to preface or supplement teaching. Students who are able to self-regulate their learning will underperform within these courses, even though self-regulation of learning is associated with academic success in the literature. The current course design hinders effective use of blended learning and, more specifically, of the technology used within those courses. In order to design more effective blended learning, greater attention needs to be paid to ways to support and elicit self-regulated learning. Trace data generated in these environments could support students and teachers in monitoring the regulation process, rather than mirroring a class average of the frequency and duration of the use of blended learning technologies.

References

Henderson, M., Finger, G., & Selwyn, N. (2016). What’s used and what’s useful? Exploring digital technology use(s) among taught postgraduate students. Active Learning in Higher Education.

Kovanović, V., Gašević, D., Joksimović, S., Hatala, M., & Adesope, O. (2015). Analytics of communities of inquiry: Effects of learning technology use on cognitive presence in asynchronous online discussions. The Internet and Higher Education, 27, 74-89.

Lust, G., Elen, J., & Clarebout, G. (2013a). Regulation of tool-use within a blended course: Student differences and performance effects. Computers & Education, 60(1), 385-395.

Lust, G., Elen, J., & Clarebout, G. (2013b). Students’ tool-use within a web enhanced course: Explanatory mechanisms of students’ tool-use pattern. Computers in Human Behavior, 29(5).

Pintrich, P., Smith, D., Garcia, T., & McKeachie, W. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801-813.

Vermunt, J. D. H. M. (1992). Leerstijlen en sturen van leerprocessen in het hoger onderwijs: Naar procesgerichte instructie in zelfstandig denken [Learning styles and regulation of learning processes in higher education: Towards process-oriented instruction in autonomous thinking]. Amsterdam/Lisse: Swets & Zeitlinger.