Generally, each new medium seems to attract its own set of advocates who make claims for improved learning and stimulate research questions which are similar to those asked about the previously popular medium (Clark, 1983, p. 447).
It was almost 35 years ago that Richard Clark restated the claim that there were “no differences [to be] expected” when learning with media as compared to traditional face-to-face teaching methods (Clark, 1983). Combining several meta-analyses, Clark concluded that there are no learning benefits to be gained from employing any medium to deliver instruction. In an attempt to reframe this debate a decade later, Kozma (1994) advocated a refocus on the conditions under which media might influence learning, concentrating on particular students, learning tasks and/or situations. In the meantime, learning with media has transformed into learning with technology, yet a recent report by the Organisation for Economic Co-operation and Development (OECD, 2015) concludes that technologies are not widely adopted in formal education and that the impact of technology on education is mixed, at best. The real contributions technology can make to teaching and learning have yet to be fully realized and explored.
The theoretical potential of technology to enhance teaching and learning is well described in the literature, where it is hailed as a modernization of education: making subject matter more attractive, enhancing student satisfaction, increasing enrollments and income, and even improving academic performance (Owston, 2013; Poon, 2013; Stockwell, Stockwell, Cennamo, & Jiang, 2015). However, research on the actual utilization of educational technologies within hybrid/blended or fully online courses shows little or no effect on course performance when compared to offline courses (OECD, 2015; Sitzmann, Kraiger, Stewart, & Wisher, 2006; Spanjers, Könings, Leppink, & Van Merriënboer, 2014; Zhao, Lei, Yan, Lai, & Tan, 2005), a result known as the ‘no significant difference’ phenomenon (Russell, 1999). Despite this lack of positive results, the use of educational technology continues to be an important priority for governments, politicians and policymakers, especially in national efforts to steer towards ‘new’ and ‘improved’ education and learning (Ministerie van onderwijs, cultuur en wetenschappen, 2015; Obama, 2011). The main reason this divide persists is the difference between the theoretical utilization and application of technology to enhance teaching and learning, versus its actual utilization and application in education (Selwyn, 2013). This difference is nicely illustrated in a recently published report on the actual utilization of educational technology by universities and lecturers (Voogt, Sligte, Van den Beemt, Van Braak, & Aesaert, 2016). Their survey amongst lecturers revealed that, when employing technology in their education, lecturers are convinced of the impact that educational technology can have on enhancing teaching and learning.
However, this belief is mainly based on experience with a certain technology, rather than on qualitative measures or on the specific affordances of that technology, which leads to suboptimal utilization and application of educational technology.
Besides lecturers, students too struggle to use educational technology in a way that enhances teaching and learning. When students enter university they are often considered to be ‘digital natives’ (Prensky, 2001). Although the notion of the ‘digital native’ has been debunked (Kirschner & Van Merriënboer, 2013), the belief remains that students nowadays are more digitally adept, which should have changed the way they interact with and respond to digital devices, the way they think and the way they process information. As such, and despite the critique of the ‘digital native’, there is still the assumption that students are able to use digital technologies effectively throughout all aspects of their university studies, such that technology enhances the quality of learning and increases course performance. However, it is important to recognize the difficulties that both universities and students face in making ‘good’ use of educational technology (Henderson, Selwyn, & Aston, 2015).
Despite the gulf between this theoretical potential and the struggle both lecturers and students face in daily practice to make good use of educational technology, the technology itself has expanded rapidly. The use of the internet, computers, mobile devices and learning management systems (LMSs) is daily practice at universities (Siemens & Long, 2011), despite the lack of evidence that these technologies enhance course performance or the quality of teaching and learning. Current developments aim at implementing these known technologies in a form of blended learning (Johnson et al., 2016). The term blended learning is not clearly defined and can refer to combinations of instructional methods (e.g. discussions, recorded lectures, simulations, serious games or small workgroups), different pedagogical approaches (e.g. cognitivism, connectivism), different delivery modes (online and offline) or the various technologies used (e.g. e-learning, podcasts or short video lectures) (Bliuc, Goodyear, & Ellis, 2007). The common distinction lies in the two modes used in the learning environment: face-to-face versus online learning resources. In a more explicit definition, blended learning describes learning activities that involve a systematic combination of co-present (face-to-face) interactions and technology-mediated interactions between students, teachers and learning resources (Bliuc et al., 2007). Blended learning is often associated with student-oriented learning, in which students have varying degrees of control over their own learning process. Blended learning could contribute to student autonomy, as it enables students to have more control over their learning path, and this autonomy should encourage them to take responsibility for their own learning process (Lust, Elen, & Clarebout, 2013).
Current approaches to blended learning within universities are largely logistical, providing students with additional resources that can be accessed at the students’ own pace to preface or complement face-to-face learning (Henderson, Finger, & Selwyn, 2016). These approaches are not the creative, connected and collective forms of learning that are often described in the literature. This dissertation focuses on the current approach, in which blended learning serves to preface or complement face-to-face learning.
One of the most basic ways to complement face-to-face learning is the deployment of recorded lectures. These recorded lectures mostly comprise integral recordings of face-to-face lectures, which are made available to students as a supplement directly after the lecture. In these cases, face-to-face lectures are often not mandatory. State-of-the-art technology has made recording lectures routine practice, and lecture recordings are now embedded in many institutions. However, the impact of the use of recorded lectures on exam performance is not clear. Moreover, research in blended learning environments shows that students do not interact with learning technologies in the same way (Inglis, Palipana, Trenholm, & Ward, 2011; Lust, Vandewaetere, Ceulemans, Elen, & Clarebout, 2011), and the same holds for interaction with recorded lectures. There is little insight into why students do or do not use certain learning technologies and what the consequences of these (un)conscious choices are for a student’s course performance, although research suggests that goal orientation (Lust et al., 2013) and different approaches to learning (Ellis, Goodyear, Calvo, & Prosser, 2008) may be important predictors of the frequency of and engagement with the use of technologies in the educational environment.
One of the advantages of blended learning is that learning activities take place in an online environment, which readily generates data about these online activities. The methods and tools that aim to collect, analyse and report learner-related educational data, for the purpose of informing evidence-based educational design decisions, are referred to as learning analytics (Siemens & Long, 2011). Learning analytics measures variables such as total time online, number of online sessions or number of hits in the learning management system (LMS) as a reflection of student effort, engagement and participation (Zacharis, 2015). Interaction with these technologies creates learner-produced data trails (trace data) of the use of these technologies. Although initially considered a byproduct, these data are now being captured explicitly, since analyzing them provides important and timely information on how students interact with the technology. Moreover, it offers insight into temporal aspects of the learning process. Interpretations of these data trails can be used to improve the understanding of teaching and learning, to predict course performance, and to inform support interventions (Gašević, Dawson, Rogers, & Gasevic, 2016). However, up to now the application of learner data analysis has had a strong focus on predicting course performance and student retention; these predictions lack scalability across courses, domains and contexts, and hardly provide insight into the actual learning process (Gašević, Dawson, & Siemens, 2015).
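The derivation of such trace-data measures can be sketched in a few lines. The sketch below is purely illustrative, not the procedure used in this dissertation; the column names (`student_id`, `session_id`, `duration_min`) are hypothetical, since real LMS logs differ per system:

```python
import pandas as pd

def summarize_trace_data(log: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw LMS click logs into per-student activity metrics.

    Assumes a hypothetical log with one row per logged hit and columns
    'student_id', 'session_id' and 'duration_min' (minutes per hit).
    """
    return log.groupby("student_id").agg(
        total_time_online=("duration_min", "sum"),   # total time online
        n_sessions=("session_id", "nunique"),        # number of online sessions
        n_hits=("session_id", "size"),               # number of hits in the LMS
    )

# Example: two students, three logged hits.
log = pd.DataFrame({
    "student_id": ["s1", "s1", "s2"],
    "session_id": [1, 1, 2],
    "duration_min": [5.0, 10.0, 3.0],
})
metrics = summarize_trace_data(log)
```

Variables of exactly this kind (total time online, number of sessions, number of hits) are what the cited learning analytics studies use as proxies for effort and engagement.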
To improve the understanding of teaching and learning, learning analytics can be used to model the dynamic process of learning. However, this modeling should build on existing knowledge about the learning process and account for both external and internal conditions for learning (Gašević et al., 2015). Considering the external conditions is regarded as important, since these conditions (amongst others, the instructional design of a course, or a student’s previous history with a particular learning tool) can change the interpretation of learning data analysis and of research findings. For example, a fully online course will use educational technology more intensively than a blended learning course, in which face-to-face contact still plays a substantial role. Moreover, considering external conditions could provide valuable information for universities struggling to make ‘good’ use of educational technology. The focus on external conditions should target distinctive elements in a course, such as differences in how, and the extent to which, different tools are utilized within a course (Gašević et al., 2015). In order to do so, generic capabilities should be determined at the software application level, rather than at the variable level. Analysis at the software application level should examine the software at the course level and explain its key variables given its instructional aims.
Also, the internal conditions for learning with educational technology are yet to be fully understood in relation to the collection of and measurement from trace data. Considering these internal conditions, such as the previously mentioned achievement goal orientation or cognitive load, could support students in making ‘good’ use of educational technology.
To summarize: the application of learning data analysis, combined with existing educational research about how internal and external conditions influence learning, could identify causes for the struggle that universities and students face in making ‘good’ use of educational technology, thereby achieving better utilization and application of educational technology in the future.
Aim of the dissertation
The aim of this dissertation is to determine which factors facilitate or hinder effective learning for students while they interact with a hybrid or blended learning environment. These factors can be related to the instructional conditions of the course (external conditions) or based on student characteristics (internal conditions). This dissertation uses learning analytics to determine these factors and combines the results with existing educational research about internal and external conditions that influence learning.
The research reported in this dissertation aims to identify these factors by addressing the central research question: which student characteristics and usage patterns when utilizing blended learning could explain the degree of adoption and effectiveness that blended learning has on course performance?
Outline of the dissertation
This dissertation contains seven chapters. The next two chapters, Chapters 2 and 3, focus on external conditions, with the aim of determining generic capabilities at the software application level. Chapter 2 presents a systematic review of the literature on empirical studies of the use of lecture capturing. Through this review of recent literature, the chapter explores the added value recorded lectures (might) have for course performance. It tries to determine to what extent recorded lectures are associated with improved course performance. Moreover, it tries to determine which instructional conditions of the course (external conditions), such as subject field, duration of the course, duration of the lecture, and class size, are associated with improved course performance.
Chapter 3 addresses the first empirical study on the use of recorded lectures in an authentic setting. It specifically focuses on the relationship between lecture attendance and the use of recorded lectures, and aims to provide insight into the added value that recorded lectures have for academic achievement. It investigates the individual use of recorded lectures and its relationship with actual lecture attendance, taking into account the different learning objectives of the course. It tries to determine to what extent students make voluntary use of face-to-face or recorded lectures and whether course performance differs between usage patterns and between learning objectives (knowledge base vs. higher order thinking skills). Moreover, it determines whether time on task has an effect on exam performance, both on an exam assessing the knowledge base and on an exam assessing higher order thinking skills.
To answer these research questions, data of 396 students’ use of recorded lectures was paired with data of their attendance at the face-to-face lectures for a course on Biological Psychology. These data were matched with the two summative assessment scores students could obtain during the course. The first assessment covered the first part of the course and had a focus on assessing the knowledge domain. The second assessment covered the second part of the course and had a focus on assessing higher order thinking skills.
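The pairing of data sources described above amounts to joining per-student tables on a shared identifier. A minimal illustration (the table and column names are invented for this sketch, not taken from the study):

```python
import pandas as pd

# Hypothetical per-student tables; real identifiers and columns will differ.
recordings = pd.DataFrame({"student_id": [1, 2, 3],
                           "recorded_lecture_views": [12, 0, 5]})
attendance = pd.DataFrame({"student_id": [1, 2, 3],
                           "lectures_attended": [3, 10, 7]})
assessments = pd.DataFrame({"student_id": [1, 2, 3],
                            "exam1_knowledge": [6.5, 7.0, 5.5],
                            "exam2_higher_order": [7.5, 6.0, 6.5]})

# Inner joins on the shared student identifier keep only students
# present in all three data sources.
paired = (recordings
          .merge(attendance, on="student_id")
          .merge(assessments, on="student_id"))
```

An inner join is the conservative choice here, since a student missing from any one source cannot contribute to analyses that relate usage to both assessment scores.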
The next three chapters focus on the internal conditions for learning, to enhance our insight into teaching and learning processes. Chapter 4 elaborates on individual differences between students in their use of (digital) learning resources. The study explores how students use different learning resources, online and offline, throughout a course in a blended learning setting, by determining ‘user profiles’ by means of a cluster analysis. These clusters are based on data about the use of various learning resources: lecture attendance, use of recorded lectures, use of formative assessments and the resulting scores, and basic LMS data about duration of LMS use and number of hits within the LMS. Moreover, the chapter tries to explain these differences in the use of learning resources by combining data on the use of the learning resources with data about the internal conditions for learning, or more specifically, the regulation of the learning process. Subsequently, it determines which combinations of learning resources contribute most to course performance and what the actual impact of these distinct user profiles on course performance is.
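A cluster analysis of this kind groups students by similarity across usage measures. The sketch below uses k-means on standardized features as one plausible approach; the feature set, the choice of k and the algorithm are assumptions for illustration, not the dissertation’s actual method:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def user_profiles(usage: np.ndarray, n_clusters: int = 3, seed: int = 0) -> np.ndarray:
    """Assign each student to a usage profile.

    `usage` is a (students x features) matrix, e.g. hypothetical columns for
    lecture attendance, recorded-lecture views, formative-assessment use and
    LMS hits. Features are standardized first so that no single measure
    dominates the Euclidean distances used by k-means.
    """
    scaled = StandardScaler().fit_transform(usage)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(scaled)
```

Standardizing before clustering matters because raw usage measures live on very different scales (minutes online vs. counts of hits); without it, the highest-variance feature would dictate the profiles.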
To answer these research questions, data about the use of the different learning resources, in addition to the recorded-lecture data, and self-reported data about regulation of the learning process were added to the existing data set.
Chapter 5 tries to determine whether differences in regulation strategies cause differences in the use of learning resources. This chapter focuses on differences in regulation strategies and analyzes how these differences are reflected in the use of different learning resources within a blended learning course. When examining the use of different learning resources, the use of offline learning resources (face-to-face activities) was combined with the use of digital learning resources, since these two modes of delivery are inextricably linked in a blended learning setting.
This research first identifies which ‘regulation profiles’ can be determined based on differences in regulation strategies, by means of cluster analysis. Next, it tries to establish whether differences in regulation strategies are reflected in differences in the use of (digital) learning resources and which combination of (digital) learning resources contributes most to course performance for each individual profile. Finally, it determines whether differences in regulation strategies are reflected in differences in course performance.
Chapter 6 describes the last study of this dissertation. The goal of this study was to establish a general prediction model in order to achieve a more general view of the important intertwined relationship of different external and internal variables in the study: the use of the learning resources, regulation of the learning process, and course performance. Often, multiple linear regression is used for prediction purposes but the current study explores these inter-related variables by subjecting them to path modeling, with course performance as the outcome variable, in an attempt to identify the decisive factors that impact academic achievement.
To establish this general prediction model, Structural Equation Modelling (SEM) was used to determine which variables have a direct influence on course performance, and more specifically, how the use of different LMS tools influences course performance. Moreover, it determined which aspects of regulation of the learning process influenced the use of different LMS tools and/or course performance.
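The path-coefficient logic behind such a model can be sketched as a series of regressions on standardized variables. Full SEM additionally handles latent variables and fits all equations simultaneously, so the sketch below is only a simplified recursive path model; the variable names and effect sizes in the demonstration are invented, not estimates from the dissertation:

```python
import numpy as np

def standardized_path_coefficients(predictors: np.ndarray, outcome: np.ndarray) -> np.ndarray:
    """Estimate one structural equation of a recursive path model:
    OLS coefficients after z-standardizing all variables, so the betas
    are directly comparable path coefficients."""
    z = lambda v: (v - v.mean(axis=0)) / v.std(axis=0)
    beta, *_ = np.linalg.lstsq(z(predictors), z(outcome), rcond=None)
    return beta

# Hypothetical demonstration: regulation -> LMS tool use -> course performance,
# with an additional direct path from regulation to performance.
rng = np.random.default_rng(42)
regulation = rng.normal(size=(500, 1))
tool_use = 0.6 * regulation + rng.normal(size=(500, 1))
performance = 0.5 * tool_use + 0.3 * regulation + rng.normal(size=(500, 1))

a = standardized_path_coefficients(regulation, tool_use)       # regulation -> tool use
b = standardized_path_coefficients(
    np.hstack([tool_use, regulation]), performance)            # direct paths to performance
```

Because the model is recursive (no feedback loops), each equation can be estimated separately; the direct and indirect effects of regulation on performance can then be read off and compared, which is the kind of insight the path model in Chapter 6 is after.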
For the purpose of generalization of the results across courses and domains, an extra data set (n=516) about regulation of the learning process, the use of different learning resources, and the impact on course performance, was collected from a course on Contract Law.
Finally, the general discussion (Chapter 7) presents an overview of the main findings and conclusions of the studies described in this dissertation. It concludes with implications for educational practice and recommendations for further research.
Bliuc, A. M., Goodyear, P., & Ellis, R. A. (2007). Research focus and methodological choices in studies into students’ experiences of blended learning in higher education. The Internet and Higher Education, 10(4), 231-244.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.
Ellis, R. A., Goodyear, P., Calvo, R. A., & Prosser, M. (2008). Engineering students’ conceptions of and approaches to learning through discussions in face-to-face and online contexts. Learning and Instruction, 18(3), 267-282.
Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68-84.
Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64-71.
Henderson, M., Selwyn, N., & Aston, R. (2015). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education, 1-13.
Henderson, M., Finger, G., & Selwyn, N. (2016). What’s used and what’s useful? Exploring digital technology use(s) among taught postgraduate students. Active Learning in Higher Education.
Inglis, M., Palipana, A., Trenholm, S., & Ward, J. (2011). Individual differences in students’ use of optional learning resources. Journal of Computer Assisted Learning, 27(6), 490-502.
Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., & Hall, C. (2016). NMC Horizon Report: 2016 Higher Education Edition. Austin, Texas: The New Media Consortium.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational technology research and development, 42(2), 7-19.
Kirschner, P. A., & Van Merriënboer, J. J. (2013). Do learners really know best? Urban legends in education. Educational psychologist, 48(3), 169-183.
Lust, G., Vandewaetere, M., Ceulemans, E., Elen, J., & Clarebout, G. (2011). Tool-use in a blended undergraduate course: In Search of user profiles. Computers & Education, 57(3), 2135-2144.
Lust, G., Elen, J., & Clarebout, G. (2013). Students’ tool-use within a web enhanced course: Explanatory mechanisms of students’ tool-use pattern. Computers in Human Behavior, 29(5), 2013-2021.
Ministerie van onderwijs, cultuur en wetenschappen, (2015). De Waarde(n) van weten: Strategische agenda hoger onderwijs 2015-2025. The Hague, The Netherlands.
Obama, B.H. (2011). State of the Union address. Washington, DC: Office of the President.
OECD (2015). Students, Computers and Learning: Making the Connection. Paris: OECD. Retrieved June 6, 2016 from http://dx.doi.org/10.1787/9789264239555-en
Owston, R. (2013). Blended learning policy and implementation: Introduction to the special issue. The Internet and Higher Education, 18, 1-3.
Poon, J. (2013). Blended learning: An institutional approach for enhancing students’ learning experiences. Journal of Online Learning and Teaching, 9(2), 271.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the horizon, 9(5), 1-6.
Russell, T. L. (1999). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education: As reported in 355 research reports, summaries and papers. North Carolina State University.
Selwyn, N. (2013). Distrusting educational technology: Critical questions for changing times. New York, NY, USA: Routledge.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web‐based and classroom instruction: A meta‐analysis. Personnel psychology, 59(3), 623-664.
Stockwell, B. R., Stockwell, M. S., Cennamo, M., & Jiang, E. (2015). Blended learning improves science education. Cell, 162(5), 933-936.
Spanjers, I. A. E., Könings, K. D., Leppink, J., & Van Merriënboer, J. J. G. (2014). Blended leren: Hype of verrijking van het onderwijs? Rapportage voor Kennisnet. Retrieved June 6, 2016 from http://www.kennisnet.nl/onderzoek/nieuws/blended-leren-hype-of-verrijking-van-het-onderwijs.
Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5), 30.
Voogt, J., Sligte, H. W., Van den Beemt, A., Van Braak, J., & Aesaert, K. (2016). E-didactiek: Welke ict-applicaties gebruiken leraren en waarom? Amsterdam: Kohnstamm Instituut. Retrieved June 6, 2016 from http://www.kohnstamminstituut.uva.nl/rapporten/pdf/ki950.pdf.
Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. The Teachers College Record, 107(8), 1836-1884.
 This dissertation uses the term blended learning. Other known synonyms are hybrid learning, technology-mediated instruction and mixed-mode instruction.