From a theoretical perspective, the benefits of blended learning are often described as “student-centered, highly personalized for each learner, and more productive, as delivers dramatically better results at the same or lower cost” (Horn & Staker, 2011, p. 13). Research in blended learning environments shows a scarcity of evidence supporting this theoretically claimed success, and only a few significant results can be directly attributed to blended learning (Sparks, 2015). One of the advantages of blended learning is that part of the learning activities take place in an online environment, which readily generates data about how students interact with the technology in the blended learning environment (Gašević, Dawson, & Siemens, 2015). The measurement, collection, analysis, and reporting of data about students and their context, with the aim of understanding and optimizing learning and the environments in which it occurs, has been described as learning analytics (Long & Siemens, 2011). Learning analytics can be used to improve our understanding of teaching and learning in a blended learning environment and possibly explain the lack of significant results. Present applications of learning analytics, however, lack a focus on improving teaching and learning; instead they aim to develop predictive models to identify students at risk of failing a course. These applications face scalability issues across contexts and domains, since the monitoring of student activity and the subsequent identification of predictive variables that contribute to course performance must be done at the course level (Macfadyen & Dawson, 2010; Gašević, Dawson, Rogers, & Gasevic, 2016). Although determining these predictive variables can provide valuable information, it does not improve our understanding of learning and teaching processes.
Moreover, current approaches require a theoretical basis that could provide a deeper understanding of teaching and learning processes within a blended learning environment. Such a theoretical basis, grounding learning analytics within existing educational research, should either focus on external conditions such as instructional design (Gašević et al., 2015) or emphasize students’ internal conditions (Winne, 2011; Winne & Hadwin, 1998).
The focus on external conditions should target distinctive elements in courses, such as differences in how, and the extent to which, different tools are utilized within a course (Gašević et al., 2015). To do so, generic capabilities should be determined at the software application level, rather than at the variable level. Analysis at the software application level should examine the software at the course level and explain key risk variables of the software given its instructional aims.
The focus on internal conditions for learning could explain the current mixed findings in the field of learning analytics, more specifically the mixed findings regarding the predictive value of different tools within the Learning Management System (LMS). Theorizing learning analytics research allows for more meaningful interpretations of prediction models, since it is known that students use different educational technologies depending on either their regulation of learning (Black & Deci, 2000) or their self-efficacy (Chung, Schwager, & Turner, 2002).
The current dissertation tried to determine which factors facilitate or hinder effective student behavior in a blended learning environment. These factors could be related to the software application (external conditions) or to student characteristics (internal conditions). This dissertation measured, collected, analyzed, and reported data about students’ interactions with a variety of online and offline learning resources to determine these factors, and combined the results with existing educational research about the external and internal conditions that influence learning.
This chapter starts with a summary of the main findings of the studies in this dissertation. First it concentrates on the external conditions, with a focus on the software application level (Chapters 2 and 3); next it discusses the internal conditions, with a focus on regulation of the learning process (Chapters 4, 5, and 6). After this summary the research question is answered. Then some methodological considerations of the presented studies are discussed, followed by recommendations for future research. The chapter ends with considerations for educational practice, from which implications are derived for the design and implementation of blended learning and the application of learning analytics.
When exploring the internal and external conditions that influence learning, this dissertation addresses the external conditions at the software application level, namely the use of recorded lectures. For the internal conditions it focuses on the role of self-regulation of the learning process.
When exploring the external conditions, the aim was to seek generic capabilities at the software application level for recorded lectures, rather than at the variable identification level. The variable identification level aims to explain the variance in minutes watched or in the number of times a student clicked on the link to the recordings, and relates these findings to course performance. Establishing the added value of recorded lectures at the variable identification level would require validation at the course level, since minutes watched or links clicked vary from course to course. Moreover, it would show fluctuations in explained variance due to variations in course design and instructional conditions, subject field, duration of the course, and other external conditions. An approach focused on the software application level builds on previous course offerings and tries to identify key risk or success variables based on the instructional aims, specific student tool-use requirements, and outcomes of previous student cohorts (Gašević et al., 2015).
A systematic review of relevant studies compared the different instructional conditions under which recorded lectures were used by students and explored whether differences in distinctive elements of the course, expressed in disciplinary, contextual, and course-specific elements, had an impact on the use of recorded lectures and subsequently influenced course performance. Four studies reported a positive relation between the deployment of recorded lectures and course performance. These four studies all used a non-random sample of the cohort of students, due to informed-consent issues. The possibility exists that students with previous (positive) experiences with recorded lectures were more likely to consent to the research, thus influencing the results. However, a positive attitude towards something that is of value for learning has been found to be associated with a greater frequency of its actual use (Von Konsky, Ivins, & Gribble, 2009) as well as non-predictive of its use (Lust, Elen, & Clarebout, 2012).
The systematic review revealed no generic capabilities at the software application level for recorded lectures regarding disciplinary, contextual, and course-specific elements. Moreover, it could not identify key risk or success variables based on different instructional conditions. However, the studies in the systematic review lack a focus on the actual use of the recorded lectures, and most confound the deployment of recorded lectures with their actual use.
A subsequent study, reported in Chapter 3, focuses on the actual use of recorded lectures instead of merely on the availability of these recordings within the course. For this aim, attendance at the face-to-face lectures was monitored at an individual level and combined with the actual use of the corresponding recorded lecture. The results show a significant effect of the form of instruction on course performance. For the exam assessing the knowledge base, face-to-face lectures contribute more to course performance than recorded lectures. When higher-order thinking skills are assessed, this difference vanishes.
When focusing on time on task, the amount of time spent on lectures (either online or face-to-face), the results show that an increase in time on task leads only to a marginal increase in course performance, following a curvilinear relationship. However, time-on-task estimations should be interpreted with some caution, since time spent does not measure active behavior. Count measures, in this case clicking the link to the recorded lecture, often prove a better fit than time-on-task measures (Kovanović et al., 2015a). The curvilinear relationship also affects the variance in course performance explained by the use of recorded lectures. To determine the relation between the use of recorded lectures and course performance, and to subsequently establish generic capabilities at the software application level, other statistical techniques besides linear regression must be explored.
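The curvilinear point above can be illustrated with a small sketch. The data below are entirely synthetic (the hours, coefficients, and noise level are invented for illustration, not taken from the dissertation); the sketch only shows how adding a quadratic term to an ordinary least-squares fit captures diminishing returns that a straight line misses.

```python
import numpy as np

# Synthetic illustration: time on task (hours) vs. course grade with
# diminishing returns. All numbers are invented for this example.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 10, 200)
# Strong early gains that level off (a curvilinear shape), plus noise.
grade = 5.0 + 1.2 * hours - 0.08 * hours**2 + rng.normal(0, 0.5, 200)

def r_squared(X, y):
    """Fit ordinary least squares; return the share of explained variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones_like(hours)
linear = np.column_stack([ones, hours])
quadratic = np.column_stack([ones, hours, hours**2])

r2_lin = r_squared(linear, grade)
r2_quad = r_squared(quadratic, grade)
# The quadratic model captures the levelling-off, so it explains more variance.
print(round(r2_lin, 3), round(r2_quad, 3))
```

A model restricted to a straight line will understate (or misattribute) the contribution of time on task whenever the true relationship levels off, which is one reason the explained variance fluctuates between courses.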
One of these statistical techniques for establishing generic capabilities at the software application level is Structural Equation Modeling (SEM). SEM attempts to estimate and test the significance of relationships, often presumed causal, between a set of variables (Cohen, Manion, & Morrison, 2011). Path analysis is a special case of SEM that uses only observed variables, whereas more traditional approaches to SEM also use latent variables to establish the model. Path analysis enables the testing of complex conceptual models in which mediation analysis (the effect of one variable on another is mediated by one or more variables) and simultaneous equations (multiple predictions of multiple variables in a model) play a key role in establishing the model (Agresti & Finlay, 1997). In Chapter 6, path analyses of two blended learning courses were validated. These courses offered students several online learning resources, including recorded lectures, in addition to the face-to-face lectures. The path models for the different courses showed two similarities. First, there is a negative relation between face-to-face lectures and recorded lectures, indicating that a student who attends a lecture is less likely to watch the recording of that lecture, and vice versa. Second, use of the LMS, either in hits or in total time in the LMS, influences course performance indirectly and is partly mediated by the use of recorded lectures. The influence of the use of recorded lectures on course performance is less straightforward, showing no effect on course performance for one blended learning course but a direct effect for the other. This indicates that course design and course alignment have an impact on the use of recorded lectures and on course performance.
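The mediation idea in such a path model can be sketched in a few lines. This is a hedged illustration, not the dissertation's actual analysis: the variables (LMS activity, recorded-lecture use, grade) and effect sizes are invented, and the direct and indirect paths are estimated with two ordinary least-squares regressions.

```python
import numpy as np

# Hypothetical path model: LMS activity (x) influences grade (y) partly
# through recorded-lecture use (m). All effect sizes are invented.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                       # e.g. standardized LMS hits
m = 0.5 * x + rng.normal(scale=0.8, size=n)  # mediator: recorded-lecture use
y = 0.3 * m + 0.1 * x + rng.normal(scale=0.8, size=n)

def ols(y, *preds):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(m, x)[1]             # path x -> m
_, direct, b = ols(y, x, m)  # direct path x -> y, and path m -> y
indirect = a * b             # mediated (indirect) effect of x on y
total = direct + indirect
print(round(direct, 2), round(indirect, 2), round(total, 2))
```

Here the indirect effect (the product of the two mediated paths) recovers roughly the simulated 0.5 × 0.3, which is the sense in which "use of the LMS influences course performance indirectly, partly mediated by the use of recorded lectures."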
Learning analytics could be used to enhance insight into teaching and learning processes, and more specifically into how blended learning influences teaching and learning and eventually course performance. As Gašević et al. (2015) suggest, more general learning analytics models are needed to identify key risk and success variables given their instructional aims. One approach is to seek generic capabilities at the software application level. When examining these generic capabilities for recorded lectures, we used a systematic review of previous studies on recorded lectures to determine these capabilities with regard to disciplinary, contextual, and course-specific elements. However, no differences in the relation between the deployment of recorded lectures and course performance could be attributed to these elements. That these variables do not identify risk or success variables is partly due to the rather straightforward structure of the courses described in the reviewed studies. In the review, the consequences of the deployment of the recorded lectures are directly linked to course performance, although courses are characterized by long learning times in which multiple variables influence course performance (Grabinger, Aplin, & Ponnappa-Brenner, 2008; Lust et al., 2012), for example the internal conditions of the students, such as motivation. None of the fourteen studies considers these covariates.
Can determination of generic capabilities at the software application level provide the theoretical basis that could enhance the understanding of teaching and learning in a blended learning environment? It depends; as seen in the current and previous systematic reviews (see for example Lust et al., 2012), the selected studies lack proper comparability due to differences between studies in sampling of the population, data collection, and analysis. Moreover, they lack a description of the contextual differences and influences, including covariates. Despite these challenges, as the current results show, it is possible to identify key risk and success variables given their instructional aims. For recorded lectures these variables are: the negative relationship between lecture attendance and the use of recorded lectures, the large individual differences in the use of recorded lectures, and the differences in impact on course performance, indicating that educational goals and learning resources must be well aligned in order to enhance course performance.
The second approach toward learning analytics research as a means of enhancing our insight into teaching and learning processes is the focus on internal conditions for learning (Gašević et al., 2015). The first approach, a focus on external conditions at the software application level, revealed large individual differences in the use of recorded lectures. Previous research into blended learning environments confirms that these individual differences are not limited to recorded lectures, but apply to all digital learning resources (Inglis, Palipana, Trenholm, & Ward, 2011; Lust, Elen, & Clarebout, 2013a; Ellis, Goodyear, Calvo, & Prosser, 2008; Kovanović, Gašević, Joksimović, Hatala, & Adesope, 2015b). It is hypothesized that these differences in the use of digital learning resources are strongly related to regulation of the learning process (Winne, 2011; Winne & Hadwin, 1998; Black & Deci, 2000; Lust et al., 2013a; Lust, Elen, & Clarebout, 2013b). However, it is not clear whether differences in regulation of the learning process determine differences in the use of digital learning resources, or whether differences in regulation of the learning process are merely reflected in differences in the use of the digital learning resources. Nor is it clear how these internal conditions are reflected in learning analytics data, and whether considering them will increase the understanding of teaching and learning in a blended learning environment.
Chapters 4 and 5 describe the relationship between regulation of learning and the use of several learning resources and vice versa. Chapter 4 focuses on individual differences in the use of learning resources by conducting a cluster analysis based on the use of these learning resources, including lecture attendance, throughout the course. These clusters were then aligned with data about regulation of the learning process, to determine if and how differences in the use of learning resources reflect in differences of regulation of the learning process.
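A cluster analysis of this kind can be sketched as follows. The data and the three usage profiles below are synthetic (the resource names and counts are invented for illustration), and a plain k-means in numpy stands in for whatever clustering procedure the chapter actually used; the point is only that students group naturally by how intensively they use each resource.

```python
import numpy as np

# Synthetic usage matrix; columns: lecture attendance, recorded-lecture
# views, formative-assessment attempts. Three invented profiles.
rng = np.random.default_rng(2)
intensive = rng.normal([12, 20, 8], 1.5, size=(40, 3))
selective = rng.normal([3, 10, 6], 1.5, size=(40, 3))
no_users  = rng.normal([1, 1, 1], 0.5, size=(40, 3))
usage = np.vstack([intensive, selective, no_users])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each row to its nearest center, then recenter."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = []
        for j in range(k):
            members = X[labels == j]
            # Keep the old center if a cluster happens to empty out.
            new.append(members.mean(0) if len(members) else centers[j])
        centers = np.array(new)
    return labels, centers

labels, centers = kmeans(usage, k=3)
# Each cluster profile is the mean usage of its members per resource.
print(np.round(centers, 1))
```

The resulting cluster profiles (mean usage per resource) are what would then be aligned with self-report data on regulation of the learning process, as described above.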
This clustering technique shows some emerging patterns that could explain the causes of differences in the use of (digital) learning resources. Compared to the other two clusters, intensive users of the learning resources score significantly higher on the subscale lack of regulation, indicating that students in these clusters experience difficulty with regulating their learning process. They seem to use lectures –either online or face-to-face– to cope with this lack of regulation. Moreover, socially oriented intensive users scored significantly higher on the subscale external regulation. Their tendency to depend on an external source to regulate their learning process is reflected in attending the face-to-face lectures, where they receive regulation from the lecturer or from their peers. Task-focused selective users show no clear pattern when regulating their learning, and therefore it seems that these students have not mastered the ability to sustainably regulate their learning (Weinstein, 1994). The no-users show significantly lower scores on the subscales external regulation and lack of regulation compared to the other clusters. These lower scores would imply that these students are better able to self-regulate their learning. However, these students show the lowest use of the learning resources. This low usage pattern could indicate that the no-users tend to overestimate their own abilities to regulate their learning, and this overestimation presumably leads them to decide against the use of the learning resources.
To sum up, differences in regulation strategies are reflected in the use of digital learning resources, showing four distinct user profiles of these learning resources.
To determine whether regulation of the learning process determines differences in the use of learning resources, another cluster analysis was conducted. Chapter 5 describes this clustering procedure, based on differences in regulating the learning process. These clusters were then aligned with the use of the different learning resources, to determine if and how differences in regulation of learning are reflected in differences in the use of the different learning resources. The cluster analysis showed three distinct patterns in the way students regulate their learning. However, there are no significant differences in the use of the learning resources between the different clusters, and subsequently no significant differences in course performance.
This notable finding runs counter to the expectation that students who score higher on the self-regulation scale would perform better within the course, as the ability to self-regulate the learning process in an effective way is linked to academic success (Beishuizen & Steffens, 2011; Winne, 2006; Zimmerman, 1990). This finding indicates that the structure of the course is beneficial for students who report low self-regulated learning, but is a disadvantage for students who report high self-regulated learning. This disadvantage could be caused by the expertise reversal effect (Kalyuga, Ayres, Chandler, & Sweller, 2003): within the cognitive load framework, instructional techniques that are effective with inexperienced learners can lose their effectiveness when used by more experienced learners. A similar effect also occurs when novices must attempt to process very complex material, which benefits the experienced learners. The students who report high self-regulated learning benefit more from the offered learning resources, which is reflected in a higher explained variance in course performance, even though the frequency and duration of their use are the same as in the other clusters.
Do differences in regulation of the learning process determine the use of learning resources, or are differences in regulation of the learning process reflected in differences in the use of learning resources? The current research confirms previous research (Inglis et al., 2011; Lust et al., 2013a; Ellis et al., 2008; Kovanović et al., 2015b) showing that students use digital learning resources differently during a course. Moreover, students differ in the way they intend to self-regulate their learning at the start of the course. Differences in the use of learning resources are reflected in differences in regulating the learning process. However, in the current study, differences in regulation strategies do not influence the use of the learning resources. Effective regulation of the learning process is seen as a process in which a student analyzes the learning situation, sets meaningful learning goals, determines which strategies to use, assesses whether those strategies are effective, and evaluates their understanding of the topic (Azevedo, Moos, Greene, Winters, & Cromley, 2008). Students frequently show ineffective self-regulation and, moreover, mistakenly believe that an ineffective strategy is a good strategy, which in itself may lead to poor regulation, even when they report otherwise (Bjork, Dunlosky, & Kornell, 2013). The courses reported in this dissertation are characterized by a strict course outline for every week and are not explicitly set up to enable self-regulated learning, or even learning per se, but tend to enable the act of being a teacher or student (Henderson, Finger, & Selwyn, 2016). In the absence of a specific course design that enables self-regulated learning, students may decide not to plan or monitor their learning, not to use effective strategies, and not to generate the interest needed to sustain the learning activity (Azevedo et al., 2008).
This lack of a need to regulate the learning process in these courses is reflected in the SEM analysis in Chapter 6, in which the significance of the relationships between variables is plotted in a conceptual model. This analysis shows only a weak effect of regulation of the learning process on course performance, mediated by variables describing the use of the learning resources.
Although students with different regulation strategies use the learning resources to the same extent, there are differences in how the use of a specific learning resource contributes to course performance. For example, for students who use an external source to regulate their learning, the formative assessments contribute only 8% to course performance, while for students who are able to self-regulate their learning, the formative assessments contribute 21.3%, although the average scores on these formative assessments show no significant difference between the clusters. This finding aligns with the results of Hoskins and Van Hooff (2005), who found that the degree of participation in formative assessments was not predicted by student variables such as age or gender, but who did find a mediating role of achievement orientation when students repeated the formative assessment. Students who improved their grade when repeating the formative assessment had a lower achievement orientation than those whose marks deteriorated. The approach students take toward the assessment influences learning when expressed in course performance. Students who use an external source to regulate their learning tend to use the formative assessments as an assessment of learning, while students who are able to self-regulate their learning approach these formative assessments as assessments for learning. Although assessment of learning is often considered summative, students with an external regulation strategy probably approach the formative assessments as if they were summative: pass or fail. Assessment for learning is formative, aiming to support and advance students in their learning. When students are introduced to new ideas and new ways of thinking, they need multiple opportunities to learn through trial and error, receive feedback, and self-monitor their performance (Nicol & Macfarlane‐Dick, 2006).
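The distinction above, equal scores but unequal predictive value, can be made concrete with a small synthetic example. The two groups, their means, and the effect sizes below are all invented; the sketch only demonstrates that two clusters can have the same average formative-assessment score while that score explains very different shares of the variance in the final grade.

```python
import numpy as np

# Two invented groups with identical mean formative-assessment scores.
rng = np.random.default_rng(3)
n = 300
score_ext = rng.normal(7.0, 1.0, n)   # externally regulated students
score_self = rng.normal(7.0, 1.0, n)  # self-regulating students
# For the self-regulating group the score is (by construction) a much
# stronger predictor of the final grade.
grade_ext = 0.3 * score_ext + rng.normal(scale=1.5, size=n)
grade_self = 0.8 * score_self + rng.normal(scale=1.0, size=n)

def r2(x, y):
    """Share of variance in y explained by a linear fit on x."""
    return np.corrcoef(x, y)[0, 1] ** 2

# Similar means, dissimilar predictive value.
print(round(score_ext.mean(), 1), round(score_self.mean(), 1))
print(round(r2(score_ext, grade_ext), 2), round(r2(score_self, grade_self), 2))
```

This is the pattern reported above: the clusters do not differ on the assessment itself, yet the explained variance in course performance differs substantially between them.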
The current research shows that only a minority of the students use the formative assessments as intended: as an assessment for learning. The majority of the students need to become aware that learning may depend less on their capacity to recognize the correct answer and more on their ability to express and discuss their own understanding of the assessed topic (Henderson et al., 2016).
This example shows that, even when students use the learning resources to the same extent, expressed in frequency and duration of use, the approach they display while using these learning resources has a direct impact on course performance. However, students never receive explicit guidance in how to approach the learning resources, as shown by the example of the formative assessments. Especially within a blended learning environment, where students show large differences in the frequency and duration of the use of learning resources in addition to differences in approaches to learning, guidance in how to use the learning resources to benefit learning and accomplish high-quality learning seems all the more urgent.
Can considering the internal conditions provide the theoretical basis that could enhance the understanding of teaching and learning in a blended learning environment? This research confirms the previously reported (Inglis et al., 2011; Lust et al., 2013a; Ellis et al., 2008; Kovanović et al., 2015b) large variations students display in the frequency and duration of the use of the learning resources. When examining the internal conditions –in this case, differences in regulation of the learning process– that facilitate or hinder effective behavior while students interact in a blended learning environment, the current research shows that differences in the use of the learning resources are also reflected in differences in regulation of the learning process. However, differences in regulation of the learning process are not reflected in differences in the duration or frequency of the use of learning resources, but in differences in the impact of that use on course performance. Both cluster analyses show that students who differ in the way they regulate their learning show differences in the impact of their use of learning resources on course performance, and eventually differences in actual course performance. Extrapolating these results to learning analytics research, considering these internal conditions not only provides information about why students show differences in (the impact of) their use of learning resources; these internal conditions also need to be strongly considered, since the current research shows that although all clicks are equal, some clicks are more equal than others (Orwell, 1954).
The current results lead to some practical implications for current approaches to blended learning. Current approaches to blended learning within universities are mostly related to logistics, providing students with additional learning resources that can be accessed at the student’s own pace to preface or complement face-to-face learning (Henderson et al., 2016). These approaches are not the creative, connected, and collective forms of learning often described in the literature, which leads to students approaching these learning resources in terms of consumption of information and content. Even so, within the current approach students are offered more control over their learning and are allowed to create blends that fit their own learning needs (Masie, 2006), with an any-place, any-time, and any-pace approach. As the current research shows, students do create their own blends; however, they do not choose wisely. For example, Chapter 3 shows that students who use the recorded lectures as a supplement to attending lectures spend almost six hours watching the lectures, while this eventually explains only 4% of the variance in course performance. Moreover, even when students use the learning resources to the same extent, their approach to learning eventually influences how the use of a specific learning resource contributes to course performance. When students interact in a blended learning environment, they need more guidance: not only in how to combine different learning resources to enhance their learning (Lust et al., 2013b), but also in how to approach the learning resources on a metacognitive level, by analyzing the learning situation, setting meaningful learning goals, determining which strategies to use, assessing whether these strategies are effective, and evaluating their understanding of the topic (Azevedo et al., 2008).
This guidance can be easily accomplished and will foster a more self-regulated approach to learning. For example, in each week of a particular course, students are offered the learning goals of that specific week, accompanied by a description of the learning resources offered to accomplish these goals. This minimal guidance allows students to determine more consciously which strategies to use to accomplish these learning goals.
The need for more guidance in a blended learning environment to benefit student learning contradicts the notion of the current student as a ‘digital native’ (Prensky, 2001). Digital natives are, according to Prensky, students of a generation that possesses sophisticated knowledge of, and skills with, information technologies. However, although current students are equipped to use certain information technologies informally in their day-to-day lives, this does not imply that they know how to use these technologies to benefit their learning. Moreover, the current research shows that a majority of the students still prefer attending the face-to-face lectures, even though a digital alternative is offered to them.
The variations in the variance in course performance explained by the use of the digital resources, ranging between 13.2% and 43.3%, do not imply that blended learning is ineffective. The lower explained variance could be caused by the current approach of the blended learning course, which is mainly aimed at providing students with additional learning resources. These additional resources cause students to approach learning in terms of consumption of information and content (Henderson et al., 2016), although there is evidence that provision of additional content does not contribute to course performance (Means, Toyama, Murphy, Bakia, & Jones, 2010). As hypothesized in Chapter 6, cognitive tools have a greater impact on course performance. Cognitive knowledge and modeling tools allow students to interact with and reshape the content, e.g. discussion boards or exercises (Lust et al., 2013b). These tools require a more active approach to learning, in contrast to basic information tools or elaborated information tools. However, students need to learn what this active behavior entails in order for active use, such as that of the formative assessments described in the previous section, to be promoted, since students show differences in approaches to learning. When designing blended learning, the focus of the learning resources should be more on how specific tools benefit learning than on how specific tools benefit teaching. This focus on added value for learning, rather than for teaching, should emerge from qualitative measures or specific affordances (Voogt, Sligte, Van den Beemt, Van Braak, & Aesaert, 2016) and should not be based on previous experiences of teachers. Since the students in the two courses in this dissertation had the same course outline and the same learning resources at their disposal, the latter clearly occurred in these courses despite their different context and domain.
The current research tries to increase the validity of learning analytics research, and hence contribute to improving our understanding of learning and teaching processes, by emphasizing either external conditions at the software application level (Gašević et al., 2015) or internal conditions for learning (Winne, 2011; Winne & Hadwin, 1998), in this case self-regulation of the learning process. Results show that, when considering the external conditions at the software application level by comparing previous course offerings through a systematic review of the literature, the reviewed studies lack proper comparability due to differences in sampling of the population, data collection, and analysis, and lack a description of the contextual differences and influences, including covariates. A systematic review is therefore not the best approach to determining these generic capabilities at the software application level. Future research should try to identify key risk and success variables of software deployment based on data from within the institution itself, so that instructional aims and contextual variables can be accounted for. Moreover, this future research should focus on whether cognitive tools, as hypothesized in Chapter 6, contribute more to course performance than information tools do.
This dissertation shows that considering students’ internal conditions improves the validity of learning analytics research. The current research confirms previously reported differences in the use of learning resources, with subsequent differences in course performance. Moreover, even when students use the learning resources to the same extent, they differ in their approach to learning, which ultimately influences course performance, as expressed in the differences in explained variance of resource use in relation to course performance.
This finding has major implications for current applications of learning analytics that focus on developing predictive models to identify students at risk of failing a course. The majority of these models are based on the class average, while this class average is not as straightforward as previously assumed. Identifying students at risk must therefore be accompanied by consideration of students’ internal conditions, since these conditions influence both the actual use of the learning resources and the impact this use has on course performance. Moreover, these internal conditions can be modified; a data-driven approach that guides these students in adjusting their approach to learning seems the obvious next step in learning analytics research.
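The problem with class-average risk rules can be sketched in a few lines. The subgroup labels, numbers, and pass/fail outcomes below are all hypothetical, invented only to illustrate how a single class-wide threshold can mislabel students whose internal conditions differ.

```python
# Hypothetical sketch: a risk rule built on the class average mislabels
# students when subgroups respond differently to the same resource use.
# (logins, passed) per student, split by an invented internal condition.
deep_learners    = [(3, True), (4, True), (2, True), (5, True)]
surface_learners = [(6, False), (7, False), (5, False), (8, True)]

all_students = deep_learners + surface_learners
avg_logins = sum(l for l, _ in all_students) / len(all_students)

# Naive class-average rule: flag anyone below the mean as "at risk".
flagged = [(l, p) for l, p in all_students if l < avg_logins]

# Here every flagged student actually passed: the class-wide threshold
# confuses low use (the deep learners) with genuine risk.
false_alarms = sum(1 for _, passed in flagged if passed)
print(avg_logins, len(flagged), false_alarms)
```

In this toy cohort the average is 5.0 logins, three students are flagged, and all three are false alarms; conditioning the threshold on the subgroup would remove them.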
Predicting course performance based on the use of learning resources within a blended learning setting seems almost contradictory. Blended learning aims to provide students with a flexible learning environment in which student autonomy and reflexivity are strengthened (Orton-Johnson, 2009). As the current research shows, this flexibility is reflected in either non-use of the learning resources or overuse of a particular learning resource. Although students do not compose their blends wisely to fit their actual learning needs, the aim of blended learning is still to provide a more tailored learning environment. The use of general prediction models therefore contradicts the objective of blended learning.
General prediction models to identify students at risk of failing a course are often accompanied by a dashboard that aids sense-making of the trace data by visualizing prediction results (Ali, Hatala, Gašević, & Jovanović, 2012). One of the best-known applications of such a dashboard is ‘Course Signals’ at Purdue University (Arnold & Pistilli, 2012). ‘Course Signals’ visualizes three student outcomes: a high, a moderate, or no risk of failing the course. More and more dashboards are currently being developed to offer students visualizations of their learning analytics results. However, if students currently lack the ability to use digital learning resources in ways that benefit and enhance the quality of their learning, we may assume that they likewise do not know how to ‘use’ a dashboard to benefit their learning. This is one more argument for examining this choice of mirroring and modeling critically.
Besides the specific methodological considerations mentioned in each chapter, the overall research is characterized by at least one general methodological consideration: the use of a single output variable to indicate ‘learning’.
The use of summative assessment as a snapshot measure of learning should entail more than merely identifying students who recognize the ‘right’ or ‘wrong’ answer (Angelo, 1999). Summative assessment should provide evidence of the learning process itself. There are more effective approaches to improving and measuring learning than using summative assessments alone, especially approaches that capture the development of student understanding over time (Wiliam, 2010). The current research does not consider multiple outcomes of learning, such as the amount of learning or the quality of learning (e.g. critical and reflective thinking) (Garrison & Kanuka, 2004), but rather uses a fairly straightforward outcome measure to indicate learning.
Recommendations for future research
Prediction of course performance is often based on the use of the Learning Management System (LMS). As mentioned in Chapter 6, research into LMS use and its contribution to course performance shows low predictive value for the frequency and duration of LMS use. Studies differ considerably: some report a predictive value as low as 21% (Gašević et al., 2016), others an explained variance of 52% for the usage of different LMS tools (Zacharis, 2015), and yet others conclude that basic LMS data do not substantially predict course performance (Tempelaar et al., 2015). This research confirms the low predictive value of LMS data. As described in Chapter 5, frequency and duration of LMS use did not contribute significantly to course performance. However, none of the aforementioned studies considers an important aspect of LMS use, namely its temporality. The predictive value of the LMS might be enhanced when research considers the order in which students access the digital learning resources within the LMS. A data mining technique that considers this temporal aspect is sequential pattern mining. It is hypothesized that certain sequences of learning resource use may distinguish better performing students from lower performing students (Perera, Kay, Koprinska, Yacef, & Zaïane, 2009). Besides enhancing the predictive value of LMS use within learning analytics research, temporal analysis of learner data could also provide instructors with valuable information to assess the intended learning design(s).
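A minimal sketch of the sequential-pattern idea is to count, per group of students, how often each ordered pair of resource accesses occurs in the clickstreams, and keep the pairs typical of one group. The resource names and sequences below are invented for illustration; real sequential pattern mining (as in Perera et al., 2009) uses more elaborate algorithms, but the ordering logic is the same.

```python
# Toy sequential-pattern sketch: ordered pairs of resource accesses that
# are frequent among hypothetical high performers but not low performers.
from collections import Counter
from itertools import combinations

def ordered_pairs(sequence):
    """All pairs (a, b) with a occurring before b in one clickstream."""
    return Counter((a, b) for a, b in combinations(sequence, 2))

def frequent_pairs(sequences, min_support):
    """Pairs appearing in at least min_support of the sequences."""
    support = Counter()
    for seq in sequences:
        support.update(set(ordered_pairs(seq)))
    return {pair for pair, n in support.items() if n >= min_support}

# Invented clickstreams (resource names are placeholders).
high_performers = [["video", "quiz", "feedback"],
                   ["video", "notes", "quiz", "feedback"]]
low_performers  = [["quiz", "video"],
                   ["notes", "video"]]

# Patterns shared by high performers but absent among low performers.
distinctive = (frequent_pairs(high_performers, 2)
               - frequent_pairs(low_performers, 2))
print(sorted(distinctive))
```

In this toy example the pattern "video before quiz before feedback" characterizes the high performers; at scale, such distinctive orderings could feed both prediction and learning-design review.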
Future research could elaborate on this hypothesis and, moreover, determine how successful sequences relate to regulation of the learning process. Self-regulation of learning is also seen as a process of temporal events that evolve during learning (Azevedo & Aleven, 2013). Strategies for regulating the learning process involve more than mere knowledge of a strategy (Zimmerman, 1990); they also involve awareness of the learning outcomes and continuous monitoring and evaluation of this meta-cognitive process. From a theoretical design perspective there are numerous effective strategies that could be scaffolded in a blended learning setting, including coordinating informational sources, drawing, mnemonics, and making inferences. A major challenge for blended learning is its current inability to detect, trace, and model effective and ineffective strategies. Temporal data analysis could model and mirror effective strategies and redirect ineffective ones (Azevedo et al., 2008). Future learning analytics research could help to develop prompts and feedback for these effective and ineffective strategies that are scalable across contexts and domains. One of the major challenges is to determine how self-regulation of the learning process is reflected in the trace data.
Agresti, A., & Finlay, B. (1997). Introduction to multivariate relationships. Statistical methods for the social sciences, Ed, 3, 356-372.
Ali, L., Hatala, M., Gašević, D., & Jovanović, J. (2012). A qualitative evaluation of evolution of a learning analytics tool. Computers & Education, 58(1), 470-489.
Angelo, T. A. (1999). Doing assessment as if learning matters most. AAHE Bulletin, 51(9), 3-6.
Arnold, K. E., & Pistilli, M. D. (2012, April). Course signals at Purdue: using learning analytics to increase student success. In Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 267-270). ACM.
Azevedo, R., Moos, D. C., Greene, J. A., Winters, F. I., & Cromley, J. G. (2008). Why is externally-facilitated regulated learning more effective than self-regulated learning with hypermedia?. Educational Technology Research and Development, 56(1), 45-72.
Azevedo, R., & Aleven, V. (2013). Metacognition and learning technologies: an overview of current interdisciplinary research. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 1-16). New York, NY, USA: Springer.
Beishuizen, J., & Steffens, K. (2011). A conceptual framework for research on self-regulated learning. In R. Carneiro, P. Lefrere, K. Steffens, & J. Underwood (Eds.), Self-Regulated Learning in Technology Enhanced Learning Environments (pp. 3-19). Rotterdam, The Netherlands: SensePublishers.
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual review of psychology, 64, 417-444.
Black, A. E., & Deci, E. L. (2000). The effects of instructors’ autonomy support and students’ autonomous motivation on learning organic chemistry: A self‐determination theory perspective. Science education, 84(6), 740-756.
Chung, S. H., Schwager, P. H., & Turner, D. E. (2002). An empirical study of students’ computer self-efficacy: Differences among four academic disciplines at a large university. Journal of Computer Information Systems, 42(4), 1-6.
Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education. Milton Park, Abingdon, Oxon, UK: Routledge.
Ellis, R. A., Goodyear, P., Calvo, R. A., & Prosser, M. (2008). Engineering students’ conceptions of and approaches to learning through discussions in face-to-face and online contexts. Learning and Instruction, 18(3), 267-282.
Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The internet and higher education, 7(2), 95-105.
Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64-71.
Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68-84.
Grabinger, R. S., Aplin, C., & Ponnappa-Brenner, G. (2008). Supporting learners with cognitive impairments in online environments. TechTrends, 52(1), 63-69.
Henderson, M., Finger, G., & Selwyn, N. (2016). What’s used and what’s useful? Exploring digital technology use(s) among taught postgraduate students. Active Learning in Higher Education.
Horn, M. B., & Staker, H. C. (2011). The rise of K–12 blended learning. San Mateo, CA: Innosight Institute, Inc. Retrieved from http://www.innosightinstitute.org/innosight/wp-content/uploads/2011/01/The-Rise-of-K-12-Blended-Learning.pdf
Hoskins, S. L., & Van Hooff, J. C. (2005). Motivation and ability: which students use online learning and what influence does it have on their achievement? British journal of educational technology, 36(2), 177-192.
Inglis, M., Palipana, A., Trenholm, S., & Ward, J. (2011). Individual differences in students’ use of optional learning resources. Journal of Computer Assisted Learning, 27(6), 490-502.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational psychologist, 38(1), 23-31.
Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R. S., & Hatala, M. (2015a). Penetrating the black box of time-on-task estimation. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (pp. 184-193). ACM.
Kovanović, V., Gašević, D., Joksimović, S., Hatala, M., & Adesope, O. (2015b). Analytics of communities of inquiry: Effects of learning technology use on cognitive presence in asynchronous online discussions. The Internet and Higher Education, 27, 74-89.
Long, P., & Siemens, G. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE review, 46(5), 30.
Lust, G., Collazo, N. A. J., Elen, J., & Clarebout, G. (2012). Content Management Systems: Enriched learning opportunities for all?. Computers in Human Behavior, 28(3), 795-808.
Lust, G., Elen, J., & Clarebout, G. (2013a). Students’ tool-use within a web enhanced course: Explanatory mechanisms of students’ tool-use pattern. Computers in Human Behavior, 29(5).
Lust, G., Elen, J., & Clarebout, G. (2013b). Regulation of tool-use within a blended course: Student differences and performance effects. Computers & Education, 60(1), 385-395.
Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an “early warning system” for educators: A proof of concept. Computers & education, 54(2), 588-599.
Masie, E. (2006). The blended learning imperative. In C. J. Bonk & C. R. Graham (Eds.), The handbook of blended learning: Global perspectives, local designs (pp. 22-26). San Francisco, CA, USA: Pfeiffer.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: US Department of Education.
Nicol, D. J., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Studies in higher education, 31(2), 199-218.
Orton‐Johnson, K. (2009). ‘I’ve stuck to the path I’m afraid’: exploring student non‐use of blended learning. British Journal of Educational Technology, 40(5), 837-847.
Orwell, G. (1954). Animal Farm: A Fairy Story. London: Secker and Warburg.
Perera, D., Kay, J., Koprinska, I., Yacef, K., & Zaïane, O. R. (2009). Clustering and sequential pattern mining of online collaborative learning data. IEEE Transactions on Knowledge and Data Engineering, 21(6), 759-772.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), 1-6.
Sparks, S. D. (2015). Blended Learning Research Yields Limited Results. Education Week, 34(27), 12-14.
Tempelaar, D. T., Rienties, B., & Giesbers, B. (2015). In search for the most informative data for feedback generation: Learning Analytics in a data-rich context. Computers in Human Behavior, 47, 157-167.
Von Konsky, B. R., Ivins, J., & Gribble, S. J. (2009). Lecture attendance and web based lecture technologies: A comparison of student perceptions and usage patterns. Australasian Journal of Educational Technology, 25(4), 581-595.
Voogt, J., Sligte, H. W., Van den Beemt, A., Van Braak, J., & Aesaert, K. (2016). E-didactiek. Welke ict-applicaties gebruiken leraren en waarom? [E-didactics. Which ICT applications do teachers use and why?]. Amsterdam: Kohnstamm Instituut. Retrieved from http://www.kohnstamminstituut.uva.nl/rapporten/pdf/ki950.pdf
Weinstein, C. E. (1994). Students at risk for academic failure: Learning to learn classes. In K. W. Prichard & R. M. Sawyer (Eds). Handbook of college teaching: Theory and applications. The Greenwood educators’ reference collection. (pp. 375-385). Westport, CT, US: Greenwood Press/Greenwood Publishing Group.
Wiliam, D. (2010). The role of formative assessment in effective learning environments. In H. Dumont, D. Istance, & F. Benavides (Eds.), The nature of learning: Using research to inspire practice, (pp.135-155). Paris, France: OECD Publishing.
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277-304). Mahwah, NJ, USA: Erlbaum.
Winne, P. H. (2006). How software technologies can improve research on learning and bolster school reform. Educational Psychologist, 41(1), 5-17.
Winne, P. H. (2011). A cognitive and metacognitive analysis of self-regulated learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 15-32). New York, NY, US: Routledge.
Zacharis, N. Z. (2015). A multivariate approach to predicting student outcomes in web-enabled blended learning courses. The Internet and Higher Education, 27, 44-53.
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational psychologist, 25(1), 3-17.