Cognification in Education: Emerging Trends

Recently, opportunities for digital and online learning have increased dramatically. Learners around the world now have digital access to a wide array of Open Educational Resources (OERs), Massive Open Online Courses (MOOCs), corporate training, certifications, comprehensive academic degree programs, and other educational and training options. Some organizations are blending traditional instruction methods with online delivery. Blended learning generates large volumes of data related to both the content (quality and usage) and the learners (study habits and learning outcomes). Correspondingly, the need to properly process voluminous, continuous, and often disparate digital data has prompted the advent of cognification. Cognification techniques design complex data-analytic models that allow natural intelligence to engage artificial smartness in ways that can enhance the learning experience. Cognification is the approach of making something increasingly, ethically, and regulatably smarter. This paper highlights how emerging trends in cognification could disrupt online education and blended learning to benefit the learner.


Introduction
The agricultural revolution and three subsequent industrial revolutions, aided by advancing communication channels, enabled societies to transform. Their paces of adoption allowed societies to accommodate the disruptive changes that the innovations brought. The agricultural revolution, the slowest of them and a precursor to the industrial revolutions, spread as foragers began to adopt the domestication of animals and the associated farming methods. The first industrial revolution introduced steam power and the mechanisation of goods production, and spread at a pace afforded mainly by roads and railroads. The second industrial revolution brought electrification and assembly-line mass production of goods to societies at a pace associated with electrical connectivity networking.
The third industrial revolution electronified and computerized societies at the pace of telephonic and early-internet communication. The fourth industrial revolution, currently ongoing, percolates our societies at the pace of light, afforded by an internet aided by Li-Fi (Light Fidelity) and fibre-optic networks, and with the anticipated pace of quantum computational power. The increasing pace of technology adoption across the globe puts societies at large at a disadvantage, because the associated technologies become available to, and used by, certain communities within societies without a proper, commonly understood vetting process. Societies are struggling to grasp, adapt to, and accommodate the inventions of the fourth industrial revolution. Governance structures are still being conceived as an afterthought. The ethicality of the fourth industrial revolution is still being studied while the fruits of the revolution are already in the marketplace. These challenges are being felt across several industries, particularly in education.
In recent years, higher education has noticeably felt the influence of the phenomenal growth of the 4th Industrial Revolution (4IR), aided by the associated technologies and theories that are paving new pathways for educators to conceive novel competences for learners. Nevertheless, technologically feasible paradigm shifts in higher education will still require thorough analysis of efficacy, ethicality, and merit. This paper provides one such analysis. From several points of view, it offers a synthesis to explore a possible marriage of intelligent computing and educational services in a way that properly fuses comparably smart, companion entities to everything in human learning.
Additionally, this synthesis demonstrates the centrality of data: data about the content, data about the interactions with the content, data on the learning community, and data about the learning outcomes. Rich datasets can be analyzed purposefully and ethically to enrich and personalize student learning experiences.

Cognification in Education
Several trustworthy organizations, media outlets, and individuals have urged the world of education to explore and accommodate the fruits of the 4th Industrial Revolution, which are being realized in other sectors of society such as business, industry, and manufacturing. They urge educators to prepare learners for a future of cognification.
Cognification is the art of making something increasingly, ethically, and regulatably smart (Kelly, 2016; Aoun, 2018). It is a major outcome of the 4th Industrial Revolution (4IR), where just-in-time solutions to the day-to-day tasks faced by humans will arrive on demand, much like electricity flows instantaneously through wires to places of need. The following example of cognification illustrates the changing learning landscape under the influence of 4IR innovations.
Consider the scenario where a learner needs to conduct a literature review on a certain hypothesis. At present, it is a mundane process of collecting literature manually. The learner could either use a script to search various databases for relevant literature or collect literature from other researchers in a research group who had performed similar manual searches in the past. Subsequently, the learner could read through the collected literature, discuss various points of view with other researchers, and eventually arrive at a synthesis. A cognified alternative to this manual process of literature review will comprise at least the following. First, a list of relevant literature available on-demand, and scripts that are continually collecting and classifying published literature as a gateway to a global, indexed collection of literature. Thus, the learner may be left with only the manual task of picking a subset of the literature. Second, analysis of how the selected literature relates to the hypothesis, created on-demand. That is, relations explored, and possibly established as causal or correlational, in individual publications will be panned, collated, and connected with the proposed hypothesis. This allows the learner to manually sift through various classifications of the derived relations and manually select the ones that closely relate to the proposed hypothesis. Third, the learner could infer several conclusions and multitudes of syntheses arising from the analyses of the selected literature, on-demand. That is, the conclusions of the selected literature and the rigour of these conclusions could be automatically inferred, yielding several potential syntheses. The learner could then perform a manual search through these candidate syntheses and select one or more plausible syntheses. Fourth, a gap analysis of the hypothesis and the leading edge of the knowledge frontier could then be derived, on-demand.
While a synthesis does include substantiation in terms of the validity of the associated relations, the learner would be tasked to manually identify a derived synthesis or to fuse a subset of candidate syntheses into a derived synthesis. That is, artificially smart technologies could supplement the natural intelligence of a researcher, replacing the traditional manual labour of collecting, reading, comprehending, and synthesizing literature with a cognified literature review that supports deeper and richer research exploration. Such a cognified solution would mostly supplement the creative side of natural intelligence, preventing excessive mental overload for the learner.
Cognified literature review is automated to relieve the learner of the mundane and to assist the learner to delve deeper into the solution space. Such a solution is expected to (1) increase its scope as required and improve its accuracy with more data; (2) be governed by ethical principles pertaining to that specific activity, particularly in accommodating inferences made by the automation mechanism as it derives relations, consolidates syntheses, performs analyses, and makes available an open research space; and (3) be regulatable by authorities, in terms of proliferation, contextualization, and application, prior to its release to users.
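The relevance-ranking stage of such a cognified literature review can be illustrated with a minimal sketch. The example below is not drawn from any cited system: the function names, the term-weighting scheme (a hand-rolled TF-IDF), and the use of cosine similarity are illustrative assumptions. It ranks candidate abstracts against a stated hypothesis, leaving the learner with the manual task of picking a subset.

```python
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,;:") for w in text.split()]

def tf_idf_vectors(docs):
    # term frequency per document, inverse document frequency across documents
    tokenized = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()
    for counts in tokenized:
        df.update(counts.keys())
    vectors = []
    for counts in tokenized:
        total = sum(counts.values())
        vectors.append({t: (c / total) * math.log(n / df[t]) for t, c in counts.items()})
    return vectors

def cosine(a, b):
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_relevance(hypothesis, abstracts):
    # rank abstract indices by similarity to the hypothesis, most relevant first
    vecs = tf_idf_vectors([hypothesis] + abstracts)
    hyp, docs = vecs[0], vecs[1:]
    return sorted(range(len(docs)), key=lambda i: cosine(hyp, docs[i]), reverse=True)
```

A production system would replace this lexical matching with semantic embeddings and continual indexing, but the division of labour is the same: the machine collates and ranks; the learner selects.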
Overall, the current trends in Artificial Intelligence and Machine Learning point to the inevitability of a 4IR-induced paradigm shift in higher education: the marriage of intelligent computing, instructional services, and learner activities. What needs our attention is the uniqueness of this revolution, different from the earlier agricultural, industrial, and digital revolutions, in fusing a comparably smart, non-living entity to everything human. Smart companion entities are increasingly becoming an integral (and compellingly necessary) part of everything we do and of the way we think and create, supplementing the very essence of humanity. A differing viewpoint might project this companion entity as essential for assisting humanity to overcome several problems plaguing our global and local communities. One must study the balancing of these two viewpoints and the associated ethics and regulation.
Keeping in mind the ongoing global effort in cognification, educational institutions need to offer informed guidance to the academics, to the researchers, and to the learners to prepare them to adapt to this potential future as it percolates our societies.
Cognification in learning and teaching: an example
Modern AI (Artificial Intelligence) techniques that drive cognification are not perfect. Failures mostly stem from a lack of quality data; alternatively, they can stem from issues with the quality and richness of the models. As the world of education becomes more evidence-based and more data-centric, the application of cognification is expected to yield more trustworthy outcomes.
Recent advances in explainable AI (xAI) could unveil the inner workings of the underlying AI techniques. Teachers and learners can already access such open AI models: for instance, learners could revise their work based on xAI feedback prior to submitting assignments, and teachers could subject learner responses to automatic evaluation against targeted rubrics. xAI techniques could potentially learn from their own explanations, leading to the possibility of an autonomous, self-improving system. However, at this stage of its development, xAI will find an acceptable level of application when tuned with a human-in-the-loop approach.
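As a minimal illustration of the kind of feature-level feedback an xAI technique could surface to a learner before submission, the sketch below decomposes a predicted score into per-feature contributions relative to an average essay. The feature names, weights, and the simple linear attribution scheme are hypothetical simplifications, not the models discussed in the example that follows.

```python
def feature_contributions(weights, essay_features, baseline_features):
    # linear attribution: how far each feature pushes the predicted score
    # above or below that of an average (baseline) essay
    return {name: w * (essay_features[name] - baseline_features[name])
            for name, w in weights.items()}

def weakest_features(weights, essay_features, baseline_features, k=2):
    # the k features that pull the predicted score down the most,
    # i.e. the candidates for formative feedback
    contribs = feature_contributions(weights, essay_features, baseline_features)
    return sorted(contribs, key=contribs.get)[:k]
```

For a deep model, the same decomposition would come from an attribution method rather than raw weights, but the feedback loop is identical: the learner sees which features depress the score and revises accordingly.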
The following example in Automated Essay Scoring (AES) highlights the impact of xAI on cognification. Kumar and Boulanger (2020a; 2020b) describe a way to predict the rubric scores of English essays by applying deep learning techniques over a vast range of writing features. Based on thorough analyses of the distributions of rubric score predictions and the distributions of resolved and human raters' rubric scores, they contend that the rubric scoring models closely approximate the performance of human raters. Their study revealed that rubric score prediction does not directly depend on a few word-count-based written language features (all word-count features were pruned). Many intuitive features were found and selected for each rubric, with no dominant features.
The data for this AES system came from the Hewlett-Packard Foundation funded Automated Student Assessment Prize (ASAP) contest (Shermis, 2014), for which Kaggle collected eight datasets of student-written essays. The essay samples (1,567) were processed by the Suite of Automatic Linguistic Analysis Tools (SALAT), which offered a set of 1,592 writing features. A subset of these features was used to predict the four rubrics: Ideas and Content, Organization, Sentence Fluency, and Conventions. The resulting performance of this AES is quite comparable to the human raters' scores. On average, the human raters' scores were identical 63% of the time and adjacent (±1) 99% of the time. The AES predictions, after rescaling to a 0-3 scale, were on average exact 65% of the time and adjacent (±1) 100% of the time. To the best of our knowledge, only one study attempted to predict rubric scores on D7 (Jankowska et al., 2018), only one study investigated rubric score prediction on D8 (Zupanc & Bosnić, 2017), and very few AES systems in general predict essay scores at the rubric level (Kumar et al., 2017). Zupanc and Bosnić (2017) reported an agreement level (QWK) of 0.70 on the Organization rubric of D8.
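The agreement measures quoted above can be computed mechanically. The sketch below is a generic implementation (not the authors' code) of exact agreement, adjacent (±1) agreement, and the quadratic weighted kappa (QWK) statistic used to report the D8 agreement level.

```python
from collections import Counter

def exact_agreement(scores_a, scores_b):
    return sum(x == y for x, y in zip(scores_a, scores_b)) / len(scores_a)

def adjacent_agreement(scores_a, scores_b):
    return sum(abs(x - y) <= 1 for x, y in zip(scores_a, scores_b)) / len(scores_a)

def quadratic_weighted_kappa(scores_a, scores_b, min_score, max_score):
    # QWK: 1 minus the ratio of observed to chance-expected quadratic disagreement
    n_cat = max_score - min_score + 1
    n = len(scores_a)
    observed = [[0.0] * n_cat for _ in range(n_cat)]
    for x, y in zip(scores_a, scores_b):
        observed[x - min_score][y - min_score] += 1
    hist_a, hist_b = Counter(scores_a), Counter(scores_b)
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = (i - j) ** 2 / (n_cat - 1) ** 2
            num += w * observed[i][j]
            den += w * hist_a.get(i + min_score, 0) * hist_b.get(j + min_score, 0) / n
    return 1.0 - num / den
```

QWK penalizes large disagreements quadratically, which is why it complements the exact/adjacent percentages: two raters can be adjacent 100% of the time yet still show systematic drift that QWK exposes.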
These analyses of the distributions of rubric score predictions and of human raters' rubric scores confirmed that the rubric scoring models closely approximate the performance of human raters. Further, the AES system can 'increasingly' improve its predictions as more essays are fed to it.
The AES model is 'ethically' shareable with students after subjecting the teacher to strict ethical guidelines in receiving student AES usage data, interpreting their writing competency in conjunction with their use of the AES, and assigning grades for students' submissions in the context of a human-in-the-loop approach. Data fed to the AES system should also be 'ethically' subjected to commonly used fairness metrics (Mehrabi et al., 2021; Verma & Rubin, 2018; Majumdar et al., 2021) using contemporary AI fairness toolkits such as IBM's AI Fairness 360, Microsoft's Fairlearn, Google's What-If, Aequitas, and Scikit-fairness. Finally, the AES system should be fully 'regulatable'. For instance, when the AES marker's performance becomes equivalent to that of the human raters, the teacher-in-the-loop directive should inject additional rubrics to drive students toward better writing competency as well as drive the AES toward increasing smartness. An LSTM recurrent neural network with an attention mechanism (Alikaniotis et al., 2016; Dong et al., 2017) could be trained to locate the spots in student essays that influence the AES system's decision when assigning rubric scores, thus improving the smartness of the AES.
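As one concrete instance of the fairness metrics mentioned above, statistical parity difference (the gap in favourable-outcome rates between demographic groups) can be computed directly. The toolkits cited (e.g., AI Fairness 360, Fairlearn) provide production implementations; the function below is only a hand-rolled sketch, and the data in the usage note is hypothetical.

```python
def statistical_parity_difference(outcomes, groups, privileged):
    # P(favourable | unprivileged) - P(favourable | privileged);
    # values far from 0 signal a disparity worth investigating
    privileged_outcomes = [o for o, g in zip(outcomes, groups) if g == privileged]
    other_outcomes = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(other_outcomes) - rate(privileged_outcomes)
```

Applied to AES training data, such a check would flag, for example, a scoring model that rates essays from one student population 'proficient' markedly less often than another before the model is ever shared with students.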
The black box of each rubric scoring model was scrutinized using an xAI system to determine the features, and the degree to which they contributed, to the determination of rubric scores. A set of the 20 most important features emerged for each rubric, in which at least 15 features were unique to that rubric and did not significantly contribute to the prediction of the other rubric scores. The image on the left in the figure presents five essays: two of them were predicted an average score of around 3.9 out of 5.0, while one essay was predicted a high score of about 4.8 out of 5.0. Interestingly, these five essays show similar patterns in how features contributed to their predicted scores. One could infer that the students who wrote these essays have similar writing competencies as well as similar writing misconceptions, even though the AES system predicted different final scores for them. Teachers can offer common feedback to such groups and explain how students in such groups can improve their writing competencies corresponding to each writing feature.
The image on the right in the figure points to five other essays where the contributions of writing features on each essay are quite dynamic. That is, the contributions of features are quite varied among the five essays. Despite these variations in contributions of features, these five essays were predicted to obtain a score close to 4.6 out of 5.0. In this case, feedback from the teacher should be more individualized.

Figure 1: Explainable AI in Automated Essay Scoring
Moreover, the study revealed that rubric score prediction does not directly depend on a few word-count-based features. Many intuitive features were selected for each rubric in the current AES, with no specific dominant feature, making it more difficult to trick the AES system. That is, the AES system could identify writing features that students should not ignore. Further, the AES system could also identify to teachers those students who lack competency in these writing features, thus reinforcing the need for a human-in-the-loop approach and empowering teachers to triangulate their instruction for greater pedagogical outcomes. That is, student essays can be clustered relative to their rubric scores to discover discriminative patterns in the essays that can lead to improved formative and remedial feedback.
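The grouping idea behind Figure 1 can be sketched as follows. Assuming each essay is represented by a vector of feature contributions (as produced by an xAI technique), a simple similarity threshold collects essays with similar patterns so a teacher can issue common feedback per group. The grouping rule and threshold below are illustrative assumptions, not the study's method.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def group_for_feedback(contribution_vectors, threshold=0.9):
    # greedily place each essay in the first group whose representative
    # (first member) has a similar feature-contribution pattern
    groups = []
    for i, vec in enumerate(contribution_vectors):
        for group in groups:
            if cosine(vec, contribution_vectors[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Essays in large groups (the left panel's pattern) warrant common feedback; singleton groups (the right panel's pattern) warrant individualized feedback.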
Being an xAI-based system, the AES can be applied on a global scale, across multiple institutions, thus offering a platform for students to compare and/or contrast their performance within a larger group of learners. The AES follows a method that promotes a degree of transparency among users and an understanding of the AES's underlying feature-based deep/shallow neural networks.
Mechanisms to introduce AI accountability and build trust between AI and human agents are crucial for the reliable and large-scale deployment of AES systems.

Theoretical model on cognification
Cognification characterizes smart entities that try to become increasingly, ethically, and regulatably smart. The variables contributing to these traits are identifiable and comparable, and accordingly lead to hypotheses that can form a theoretical model of cognification. Further, when cognified entities engage with people in a human-in-the-loop approach, variables of collaboration between the human and the cognified entity arise in terms of sense-making and decision-making.
Sense-making (Abbass, 2019) enables an entity to a) explore data (e.g., create opportunities to collect new datasets), b) derive data (e.g., create new data from existing datasets), c) interpret data (e.g., longitudinal synthesis), and d) share data (ethically and regulatably). Decision-making (Abbass, 2019) enables it to e) assess opportunities and risks in contexts and situations, f) design, plan, and generate courses of action, g) select and execute one or more actions, h) reason about and explain the choices made (e.g., causal discovery, trust relations), and i) have a degree of autonomy in executing any of these (a through h) traits of the cognified entity. Cognification of an entity resides at the intersection of its traits of collaboration, sense-making, and decision-making.
Literature defines these traits at various levels of granularity. Sheridan (1992) identified several levels of autonomy. Scholtz (2003) arrived at different types of roles for collaborating partners. For example, Scholtz defines 'supervisor', 'operator', 'teammate', 'bystander', and 'mechanic' as the roles for humans in human-robot interactions. Models of self-regulation, from literature (Winne & Hadwin, 1998), and synthesized from literature (Brokenshire & Kumar, 2009), expand the trait of regulation at several granularity levels. The emergence of trust in collaboration between cognified entities has its own levels of granularity (Abbass, 2019;Mohkami et al. 2015).
In summary, theoretical modelling is essential for the operationalization of cognified entities in teaching, learning, and training. Cognification is liable to misuse if such models to govern the creation, application, and retirement of cognified entities are missing.

Implications of cognified education
Cognification is the art of making an entity increasingly, ethically, and regulatably smarter. As the world's complexity grows, humans are discovering that the manual methods of the 3rd Industrial Revolution are inadequate to entirely resolve complex problems, necessitating cognification. Cognified entities are a significant part of the 4th Industrial Revolution, especially in the context of education.
Deloitte's fifth annual Global Human Capital Trends report and survey (2017) established that the half-life of a learned skill, which used to be approximately 25 years, is now roughly 5 years. Deloitte determined that the entire length of a career currently averages 65 years, while the tenure in a specific career has fallen to about 4.5 years. That is, people are spending more time working and are willing to switch careers more frequently.
Accordingly, learners need to plan for longer-term learning journeys that continue beyond graduation, to reskill and upskill over the course of their conceivably varied careers.
Through this lifelong learning journey, learners need to retain traces of their learning as evidence to support their competencies. Such evidence may originate from traditional and/or non-traditional learning environments.
Technologies such as Blockchain networks can assist institutions, employers, and other such agencies in verifying the new competencies that learners declare. Blockchain networks, while guaranteeing immutability, should be cognified to pave the way for automated mapping of learning traces to estimates of learned competencies. Such a cognified mapping could rely on theoretical support that includes both the human-in-the-loop interactions and the supplemental cognification-in-the-loop interactions.
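The immutability property at the heart of this idea can be sketched minimally: each learning record is hash-chained to its predecessor, so any later tampering is detectable. The record fields below are hypothetical, and a real blockchain network adds consensus, signatures, and distribution on top of this core mechanism.

```python
import hashlib
import json

def add_record(chain, record):
    # append a learning trace, linking it to the previous block's hash
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    # recompute every hash; any tampered record breaks the chain
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev_hash": prev_hash}, sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True
```

A cognified layer would then read such verified traces and map them to competency estimates, with the chain guaranteeing that the evidence itself has not been altered since it was recorded.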
Educational communities, hitherto geographically localized, are embracing globalized learning contexts.
Consequently, competition for work in geographic locales has become global. Global workspaces expect workers both to accommodate cognified tools (as part of their learning journey) and to be competent in targeted cognitive capabilities (such as cultural agility and critical thinking). Scholars, technologists, and other stakeholders are painting a future of artificial smartness that incorporates human creativity and intelligence, where multiple systems synergize to provide smart support to augmented sense- and decision-making in teaching, learning, and training domains.