
AI-enabled blended teaching in Information Theory and Coding: an outcomes-based mixed-methods approach


Abstract

With the rapid development of generative AI (GAI), exploring its effective integration into engineering education has become a necessity. This exploratory study incorporates GAI into an outcome-based education (OBE) framework for an Information Theory and Coding (ITC) course, using mixed-methods analysis of design logic, implementation paths, and learning mechanisms. Quantitative analysis revealed that a “Planning–Verification” pattern, in which AI outputs are actively cross-validated, was associated with significantly higher learning gains than passive reliance, and the resulting “productive friction” benefited both lower-performing and advanced students. With verification checkpoints in place, AI-assisted activities remained aligned with intended learning outcomes, encouraging critical thinking and academic integrity. The study provides a transferable design framework and evidence-based guidance for STEM educators implementing AI in engineering education.

1 Introduction

Having evolved over the last three decades, outcome-based education (OBE) has moved from policy rhetoric into a practical architecture for curriculum design and an accreditation measure in higher education, particularly in engineering education (Asbari and Novitasari, 2024; Mahrishi et al., 2025). OBE is widely recognized to have shaped the way universities construct learning experiences, by identifying clear, assessable graduate attributes informed by industry standards. The shift from input-oriented education to an outcome-based paradigm has been promoted by accreditation organisations worldwide, which have adopted continuous improvement cycles to ensure that graduates attain not only technical competence but also problem-solving skills, communication, and an appreciation of ethics in addressing complex problems (Cruz et al., 2020; Syeed et al., 2022).

In this context, Information Theory and Coding (ITC) is a core area in the formation of engineers and information scientists, providing the mathematical foundation for communication technology, data compression, and error control. Fundamental subjects such as Shannon’s source and channel coding theorems, entropy, mutual information, and error-control codes are highly abstract and require substantial mathematical reasoning (Kumar et al., 2023; Mielikäinen, 2022; Ling and Krishnasamy, 2023). Mastery demands familiarity with probability, linear algebra, discrete mathematics, and algorithmic reasoning. Due to the abstract nature and cumulative structure of ITC, many undergraduates struggle with persistent misconceptions, fragile transfer of knowledge, and uneven outcomes. In the absence of timely formative feedback, such difficulties typically go unresolved and accumulate, which in turn hinders students’ capacity to apply theory to real engineering tasks (Raza et al., 2024).

A proposed solution to this problem is blended learning, which combines flexible online modules with face-to-face teaching (Sala et al., 2023; Panday et al., 2025; Younas et al., 2025). Online materials can include narrated derivations, visual simulations, adaptive quizzes, and collaborative discussion forums, while class time is used for sense-making and peer-teaching activities. By combining online learning analytics with in-class formative assessment, faculty can identify learning gaps sooner. Early intervention is important, as mistakes at the beginning can affect subsequent content (Tempelaar et al., 2024; Riegel, 2024).

The advent of generative artificial intelligence (GAI) in recent years adds a further element to this hybrid approach. Powerful language models and domain-specific tools can provide explanations at varying levels of elaboration and difficulty, guide readers through step-by-step derivations (complete proofs on demand), generate tailored examples and exercises, offer MATLAB/Python code completion, and detect computational or conceptual errors in real time (Dwivedi et al., 2023; Kasneci et al., 2023; Pinto et al., 2024; Batista et al., 2024; Mo and Crosthwaite, 2025). In ITC, which demands precise symbolic processing and blends abstract reasoning with computational skill development, AI could function as a dynamic scaffold, tailoring support to learners’ pacing and needs (Mahrishi et al., 2025; Khawrin and Nderego, 2023).

These opportunities carry risks, however. Errors in AI output could lead students to false conclusions, overdependence may erode independent reasoning skills, and questions of academic integrity arise (Vieriu and Petrea, 2025). Ethical norms, such as UNESCO’s recommendations on the ethics of artificial intelligence, emphasize human supervision, transparency, bias correction, and data privacy. There are also equity considerations: uneven availability of AI-enabled devices, reliable Internet access, and AI literacy can widen achievement gaps, especially in resource-poor schools. Addressing these challenges implies aligning the use of AI with educational objectives and institutional safeguards (Jobin et al., 2019; Floridi and Cowls, 2022; Holmes et al., 2022).

From an OBE viewpoint, the infusion of AI into ITC teaching can be characterized through an “Outcome–Activity–Assessment” correspondence, where each planned outcome is systematically associated with a hybrid of AI-aided and traditional activities, themselves accompanied by assessments capturing authentic proficiency (Rani et al., 2024; Alwakid et al., 2025; Shanto et al., 2025). This paper explores how AI-enabled tasks foster deep learning through sound instructional design. Such tasks include using AI visualization and peer teaching to understand entropy and mutual information; deepening the understanding of error-correcting codes through AI-aided debugging and teacher guidance; and using AI simulation to analyze performance and complete academic defenses on conceptual and practical grounds (Kohen-Vacs et al., 2025; Amiri and Islam, 2025).

We assume that if AI is carefully integrated into a purposeful OBE-aligned blended design, student performance and engagement will improve, and usage patterns reflecting prior readiness, digital literacy, and technology access will be revealed (Cao and Phongsatha, 2025; Ji et al., 2024; Mulenga and Shilongo, 2025). Students with stronger mathematical prerequisites and greater AI literacy are expected to use AI for deeper exploration, while others rely on it for procedural support, justifying targeted scaffolding and equity-focused interventions (Garcia Ramos and Wilson-Kennedy, 2024; MacCallum et al., 2024; Lai and Lin, 2025).

Against this background, we designed an exploratory approach to investigate the design of GAI-supported blended instruction within OBE and its association with student performance. In particular, we investigate the fit between design and learning behavior. We aim to answer the following four questions:

  • RQ1: How does AI-enabled blended instruction align with the achievement of Intended Learning Outcomes (ILOs) in a mathematics-intensive ITC course?

  • RQ2: How does student engagement change across the pre-class, in-class, and post-class stages?

  • RQ3: How do different AI usage patterns (planning–verification vs. passive reliance) affect the learning performance of students with different levels of prior proficiency?

  • RQ4: How do embedded verification checkpoints and reflective tasks stimulate students to think critically and remain aware of academic integrity during AI interaction?

This research contributes to the empirical grounding of responsible AI in engineering education. Through a transferable design framework and evidence-based guidelines, it aims to enable educators to adapt this approach across STEM (Science, Technology, Engineering, and Mathematics) content areas and institutional contexts, bridging the gap between AI’s potential and its practical, ethical, and equitable implementation in real classrooms.

2 Materials and methods

2.1 Research design

Because this paper is intended as exploratory design research, we adopt a sequential explanatory mixed-methods approach to investigate the use of GAI and OBE for blended teaching of an Information Theory and Coding course, examining instructional design logic, implementation pathways, and learning processes. The study remains exploratory and observational rather than directly linking GAI use to learning outcomes causally.

As shown in Figure 1, the research framework is designed as a design-based exploratory inquiry into the integration of GAI into an OBE framework for ITC. The framework involves three phases: Phase 1 (Diagnosis and Design) comprises a baseline survey of students’ AI-based learning and AI literacy; Phase 2 (Implementation and Intervention) uses AI-driven formative assessment based on process data to adjust instruction across all learning stages; Phase 3 (Evaluation and Reflection) is an AI-driven qualitative inquiry into student engagement and conceptual growth. Importantly, the evaluation phase feeds back to refine the theoretical construction and practical optimization of the AI-based blended teaching model.

2.2 Participants and instructional context

A total of 48 second-year students in Electronic Information Engineering from a naturally formed class participated in the present study (28 males, 58.3%, and 20 females, 41.7%). To characterize the cohort, generic background details such as gender profile, prior experience of AI use, and past learning experiences were captured (see Table 1). All students had completed mandatory courses such as Advanced Mathematics, Probability and Statistics, and Digital Signal Processing, ensuring a comparable baseline of prerequisite knowledge.

Variable Category n %
Major Electronic information engineering 48 100.0
Gender Male 28 58.3
Female 20 41.7
Year Sophomore 48 100.0
AI usage frequency Rare (<1 time/week) 10 20.8
Occasional (2–3 times/week) 22 45.8
Frequent (≥4 times/week) 16 33.4
Prior AI learning experience Yes 30 62.5
No 18 37.5

Demographic and background characteristics of participants (N = 48).

Sample size: We performed a power analysis using G*Power 3.1 to determine the minimum sample size needed. Assuming the medium effect size recommended by Cohen (1988) (d = 0.5), a significance level of α = 0.05, and target power 1 − β = 0.8, the required minimum sample size is 44. The final sample of 48 exceeds this threshold, ensuring that the study has sufficient power to test teaching effectiveness and improving the reliability of the statistical results.
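As a point of reference, an analogous computation can be sketched in Python with statsmodels. This is a minimal illustration assuming a two-tailed paired-samples t-test; the minimum n reported above (44) depends on the specific test family and tail settings chosen in G*Power, so the figure below is indicative rather than a reproduction of the original run.

```python
# Illustrative power computation, assuming a two-tailed paired-samples t-test.
# Different G*Power test settings yield different minimum sample sizes.
from statsmodels.stats.power import TTestPower

n_required = TTestPower().solve_power(
    effect_size=0.5,           # Cohen's d = 0.5 (medium, per Cohen 1988)
    alpha=0.05,                # significance level
    power=0.8,                 # target power 1 - beta
    alternative="two-sided",
)
print(f"Minimum sample size: {n_required:.1f}")  # ~34 under these exact settings
```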

The aim of the ITC course is to provide students with structured knowledge of the basic theory and tools for formulating problems and analyzing solutions using information-theoretic approaches. Over the course, students master fundamental concepts such as information entropy, Shannon channel capacity, and the source and channel coding theorems, and become acquainted with error-correcting codes used in practice. This pedagogical approach allows them to connect abstract theory with practical implementation and lays solid groundwork for follow-up courses and research activities.
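To make two of these core quantities concrete, the following minimal Python sketch computes the Shannon entropy H(X) = −Σ p log2 p and the binary symmetric channel capacity C = 1 − H2(ε); the numeric examples are illustrative, not course data.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def bsc_capacity(eps):
    """Capacity of a binary symmetric channel with crossover probability eps:
    C = 1 - H2(eps), where H2 is the binary entropy function."""
    return 1.0 - entropy([eps, 1.0 - eps])

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair binary source
print(entropy([0.9, 0.1]))   # ~0.47 bits: a biased source carries less information
print(bsc_capacity(0.11))    # ~0.5 bits per channel use
```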

The course was delivered in a blended mode comprising both online and face-to-face teaching, following an OBE approach. To inform teaching and assessment, four Intended Learning Outcomes (ILOs) were enumerated: mastering ITC principles and concepts; performing encoding/decoding with performance analysis and optimization; designing and validating coding schemes for different channel conditions; and fostering academic integrity and professional responsibility in both engineering practice and AI engagement.

2.3 Blended teaching and AI-enabled design

The instructional design involved three stages: pre-class preparation, in-class activities, and post-class follow-up. Based on OBE principles, the design integrated GAI tools in all stages to ensure a seamless learning experience for students. The goal is to align outcomes, activities and assessments in order to improve the learning process.

To ensure service stability, data accuracy, and compatibility with Chinese learning management systems, we designed and implemented a localized generative AI teaching solution built on a domestic learning management system (Yuketang). Students choose specific AI tools such as DeepSeek, Zhipu AI (GLM), Ernie Bot, and Tongyi Lingma according to their learning tasks, conducting activities ranging from concept exploration and literature support to code debugging. All AI tool use follows structured guidelines embedded in the LMS and a closed-loop process of “guidance within the platform, practice outside the platform, and return of results within the platform.” Students must submit “AI-assisted creation logs,” including screenshots of key processes and reflections, to the LMS to record learning progress. We introduced CRIS prompts (Concept clarification, Role-based step-by-step guidance, Interaction debugging, and Simulation support). These prompts help students actively use AI-generated visualization scripts to verify concepts and to seek debugging ideas rather than direct answers, transforming AI into a cognitive partner that supports theory, empirical analysis, and academic argumentation toward deep learning.
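As an illustration of the kind of structured prompt the CRIS framework encourages, the template below is a hypothetical reconstruction; the verbatim scaffolding embedded in the course LMS is not reproduced here.

```python
# Hypothetical CRIS-style prompt template (illustrative wording only).
CRIS_TEMPLATE = """\
[Concept clarification] Explain {concept} in one paragraph, then list its
assumptions and limits.
[Role-based guidance] Act as a tutor: walk me through the derivation of
{result} step by step, pausing so I can attempt each step first.
[Interaction debugging] Here is my {language} code and its error:
{code_and_error}. Point to the likely faulty line and explain why, but do
not rewrite the whole program for me.
[Simulation support] Suggest a small simulation I could run to verify
{claim} myself, including which plot to inspect.
"""

prompt = CRIS_TEMPLATE.format(
    concept="mutual information",
    result="the channel capacity of a binary symmetric channel",
    language="Python",
    code_and_error="<paste here>",
    claim="that capacity drops as the crossover probability approaches 0.5",
)
```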

During the pre-class stage, students interacted with the Learning Management System (LMS) to view introductory videos and engage with text materials, and used GAI to generate analogies, derivation prompts, and potential solution strategies for example problems. To focus preparation and encourage critical thinking, a brief quiz tested understanding and prompted students to consider the trustworthiness and constraints of AI-generated material.

In class, building on these prompts to increase metacognitive support, students were organized into groups to collaborate on exercises that included using GAI-generated cases to refine mathematical derivations, contrast alternative algorithmic approaches, and debug code implementations in MATLAB or Python. These group activities were reinforced with instant quizzes and Meta-Metacognitive Feedbacks (MMGFs), which not only promoted peer interaction and immediate feedback but also provided a visible measure of the achievement of the intended learning outcomes under OBE across the three stages.

In the post-class extension phase, students carried out homework and small projects that incorporated AI-aided output into their independent work. One assignment required submitting reflective journals as evidence of how they applied AI tools, e.g., what methods were used to check the accuracy of generated content and what techniques were used for refinement. Instructors reviewed these submissions with the help of a rubric that evaluated not only technical quality but also responsibility and ethics in AI use.

2.4 Data collection

Consistent with the mixed-methods design, data were collected from multiple sources to ensure triangulation and depth of insight.

2.4.1 Quantitative data

A 24-item questionnaire was used to assess demographics and three constructs, each with good internal consistency: AI Usage Patterns (e.g., “I use AI to break down complex steps”; Cronbach’s α = 0.85), Perceived Learning Outcomes (e.g., “The AI tools helped me understand the concept of channel capacity better”; Cronbach’s α = 0.89), and Ethics Awareness (e.g., “I explicitly check AI code against textbook formulae before submission”; Cronbach’s α = 0.82). This was supplemented by LMS records (visit frequency, time-on-task), in-class quiz scores, assignment grades, and pre/post-test results.
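For transparency about how such internal-consistency coefficients are obtained, the sketch below computes Cronbach’s α from a respondents-by-items score matrix; the data shown are invented for illustration, not the study’s survey responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of row totals
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy example: 5 respondents x 4 Likert items (hypothetical data).
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 2))
```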

2.4.2 Qualitative data

Information was gathered through classroom observations (focusing on tool use and interaction patterns), student reflective journals, written work, and teaching logs containing formative feedback from the instructional team.

Table 2 summarizes the systematic consistency between these data sources and specific research questions.

Research question (RQ) Focal dimensions Quantitative data sources Qualitative data sources
RQ1: ILO alignment Achievement of learning outcomes; curricular coherence. Pre/post-tests; Assignment grades; Final project scores. Teaching logs; Formative feedback from the team.
RQ2: Temporal dynamics Student engagement across pre-, in-, and post-class stages. LMS analytics (clicks, time-on-task); Quiz completion rates. Classroom observations; Interaction patterns during inquiry.
RQ3: AI usage patterns Correlation between AI behavior (Planning vs. Reliance) and proficiency. 24-item questionnaire (Usage frequency/purpose); GPA/Prior grades. Student reflective journals; Samples of AI-assisted written work.
RQ4: Critical thinking Awareness of academic integrity; Effectiveness of verification checkpoints. Questionnaire (Perception of ethics/fairness). Reflective tasks; Dialogue logs between students and GAI.

Alignment of research questions, data sources, and analytical dimensions.

2.5 Instrument validation and quality control

The research instruments were validated through a multi-stage process. For the knowledge tests, items were designed based on the course syllabus and the four OBE learning outcomes, followed by an expert review to ensure content validity. To avoid memory effects while maintaining comparability, the pre- and post-tests used parallel forms covering identical concepts with different problem cases. For the skills and ethics rubrics, all subjective assignments and test items were graded by instructors blinded to student identity and submission time (pre vs. post). For the survey, a three-step process was used: pilot testing, expert panel review, and cognitive interviewing. Fifteen former ITC students completed the instrument in a pilot phase to test the sequence of items and the understandability of technical terms. Two experts in AI-supported instruction and OBE-oriented curriculum design then examined the questionnaire and suggested modifications. To enhance authenticity and understandability, cognitive interviews were conducted with five current students to verify that AI-related terms and situational settings were valid. For qualitative analysis, combined deductive and inductive thematic analysis was used; the deductive framework was based on the four OBE learning outcomes, while inductive coding identified new themes. Two researchers coded the data independently, then compared their codes, discussing discrepancies and refining inter-rater agreement through an iterative process.
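Inter-rater agreement of the kind described here is commonly quantified with Cohen’s kappa; the following is a minimal sketch on invented coder labels (the study reports iterative agreement refinement rather than a specific kappa value).

```python
# Hypothetical illustration of inter-rater agreement via Cohen's kappa;
# the labels below are invented, not the study's coding data.
from sklearn.metrics import cohen_kappa_score

coder_a = ["verify", "plan", "rely", "verify", "plan", "rely", "verify", "plan"]
coder_b = ["verify", "plan", "rely", "plan",   "plan", "rely", "verify", "verify"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```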

2.6 Data analysis

To ensure a rigorous evaluation of the AI-enabled blended teaching model, both quantitative and qualitative analyses were performed.

2.6.1 Quantitative analyses

Descriptive and inferential statistics (paired-samples t-tests and correlation tests) were used to study associations and differences between AI usage frequency and purposes and ILO achievement. To control for Type I errors due to multiple comparisons (four ILOs and multiple engagement indicators), a Bonferroni correction was applied. In addition, two-way ANOVA and hierarchical regression analysis were used to model the interactions between student proficiency and AI usage and to ensure the reliability of the results.
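The sketch below illustrates this analysis pipeline, i.e., a paired-samples t-test, Cohen’s d for paired data, and the Bonferroni-adjusted threshold, on simulated pre/post arrays; the study’s raw data are not reproduced here.

```python
# Minimal sketch of the paired-test + Bonferroni workflow on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(75, 8, size=48)             # hypothetical pre-test scores
post = pre + rng.normal(6, 7, size=48)       # hypothetical gain

t, p = stats.ttest_rel(post, pre)            # paired-samples t-test
diff = post - pre
d = diff.mean() / diff.std(ddof=1)           # Cohen's d for paired data

n_tests = 4                                  # four ILOs compared
alpha_adj = 0.05 / n_tests                   # Bonferroni-adjusted threshold (0.0125)
print(f"t(47) = {t:.2f}, p = {p:.4f}, d = {d:.2f}, "
      f"significant after correction: {p < alpha_adj}")
```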

2.6.2 Qualitative analyses

Qualitative data from reflective journals and teaching logs were coded into thematic categories using combined deductive and inductive approaches. These patterns were combined with quantitative results to provide a general interpretation of how AI support worked at different stages. Sample coded excerpts and mappings between the thematic codes and research claims are provided in Section 3.6. Coding was iteratively refined to maintain high reliability between coders.

3 Results

This section describes the learning outcomes of integrating GAI into the ITC course under OBE, categorized along four dimensions: (a) acquisition of OBE-aligned competencies; (b) student engagement across the stages of blended instruction; (c) AI usage patterns and their association with learning performance; and (d) emergent explanations of how the observed effects arose. Quantitative results are presented first, followed by qualitative results and a mixed synthesis of both strands of evidence.

Table 3 summarizes the paired-samples t-test results for student performance on the four ILOs. To avoid Type I errors from multiple comparisons, we applied a Bonferroni correction. Results show statistically significant improvement on all ILOs following the intervention: conceptual understanding [t(47) = 4.35, p < 0.001, Cohen’s d = 0.63], skills and application [t(47) = 4.10, p < 0.001, Cohen’s d = 0.59], problem solving and innovation [t(47) = 4.50, p < 0.001, Cohen’s d = 0.65], and ethics and professional competence [t(47) = 3.60, p = 0.001, Cohen’s d = 0.52]. All effects exceeded a medium size (d > 0.50), suggesting a positive association between the AI-enhanced blended teaching model and student gains.

Intended learning outcome (ILO) Pre-test, Mean (SD) Post-test, Mean (SD) Mean difference t(df) p value Cohen’s d
ILO1 (Conceptual understanding) 76.2 (8.4) 83.5 (8.7) +7.3 4.35 (47) < 0.001 0.63
ILO2 (Skills and application) 74.5 (7.6) 80.2 (8.0) +5.7 4.10 (47) < 0.001 0.59
ILO3 (Problem-solving and innovation) 73.1 (8.2) 81.0 (8.6) +7.9 4.50 (47) < 0.001 0.65
ILO4 (Ethics and professionalism) 74.0 (7.8) 78.5 (8.1) +4.5 3.60 (47) 0.001 0.52

Paired-samples t-test results for the four intended learning outcomes (ILOs) before and after the instructional intervention.

p-values were interpreted using the Bonferroni correction to account for multiple comparisons across four ILOs. The adjusted significance threshold was set at α = 0.05/4 = 0.0125. All reported p-values remain statistically significant after correction.

3.1 OBE-aligned learning outcomes

3.1.1 Knowledge and understanding (ILO1)

Follow-up analyses showed that learning gains were maintained across several dimensions. Students’ conceptual knowledge improved, with stronger performance on fundamental ITC themes such as mutual information, entropy, and channel capacity. These gains were found particularly on reasoning-based and deep-comprehension tasks, as opposed to mere recall. Transfer was also enhanced, as evidenced by better performance on near-transfer tasks involving the capacity formula at different signal-to-noise ratios. This indicates that the pre-class structure supported not just short-term prompting but concept reorganization. Reflective journal entries also demonstrated a reduction in common misconceptions. For example, the widely held belief that increased redundancy necessarily lowers error rates was progressively superseded by mechanism-based accounts capturing the trade-offs among coding rate, code distance, and noise.
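The near-transfer tasks mentioned here involve evaluating channel capacity at different signal-to-noise ratios; as a worked instance of the underlying Shannon–Hartley formula C = B log2(1 + SNR), with illustrative values rather than actual assessment items:

```python
import numpy as np

def awgn_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + SNR), in bits/s."""
    snr = 10 ** (snr_db / 10)          # convert dB to a linear power ratio
    return bandwidth_hz * np.log2(1 + snr)

# Capacity of a 1 MHz channel at several SNRs:
for snr_db in (0, 10, 20):
    c = awgn_capacity(1e6, snr_db)
    print(f"{snr_db:>2} dB -> {c / 1e6:.2f} Mbit/s")  # 1.00, 3.46, 6.66 Mbit/s
```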

Regarding a possible ceiling effect: the average pre-test concept score was 76.2 (SD = 8.4), representing a medium-to-high baseline and a potential ceiling. However, further analysis of the scores showed that the post-test improvement was driven mainly by medium and difficult questions (application and analysis cognitive levels), with the average score on high-difficulty items rising from 52.3 to 68.7. The intervention thus promoted deep learning and higher-order cognitive engagement rather than surface knowledge acquisition.

3.1.2 Skills and application (ILO2)

Evidence after these interventions indicated significant improvement in procedural and analytical competency. In assignments, students made fewer algebraic errors when deriving basic decoders (e.g., ML and MAP), and their solution steps appeared more systematic, indicating greater procedural fluency. Computational skill also increased: MATLAB/Python submissions gained a clear modular structure separating encoding and decoding from channel simulation, contained better test cases, and turned disparate scripts into structured, reusable workflows, leading to more efficient and maintainable designs. Furthermore, more groups conducted systematic performance analysis, such as plotting BER vs. Eb/N0 at various code rates, and reinforced their design decisions with engineering considerations, showing greater analytical depth.
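To give a concrete sense of the modular encode–channel–decode workflow described above, here is a minimal sketch of a repetition-code BER experiment with BPSK over AWGN and majority-vote decoding; it is an illustrative reconstruction, not a student submission.

```python
import numpy as np

def simulate_ber(ebn0_db, n_bits=200_000, reps=3):
    """BER of a rate-1/reps repetition code with BPSK over AWGN,
    decoded by majority vote (reps should be odd). Illustrative only."""
    rate = 1.0 / reps
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1.0 / (2 * rate * ebn0))    # noise std with Es normalized to 1
    bits = np.random.randint(0, 2, n_bits)
    coded = np.repeat(bits, reps)               # encoder: repeat each bit
    rx = (1 - 2 * coded) + sigma * np.random.randn(coded.size)  # BPSK + AWGN
    hard = (rx < 0).astype(int)                 # hard-decision demodulation
    decoded = (hard.reshape(-1, reps).sum(axis=1) > reps // 2).astype(int)
    return np.mean(decoded != bits)

for ebn0_db in range(0, 9, 2):                  # sweep Eb/N0 from 0 to 8 dB
    print(ebn0_db, "dB ->", simulate_ber(ebn0_db))
```

Plotting such BER values against Eb/N0 on a semilog axis yields the waterfall curves the student groups produced.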

3.1.3 Problem-solving and innovation (ILO3)

The project results indicated increasing maturity in design thinking and ownership of products. Proposals evidenced broader, context-dependent decision making, e.g., choosing between convolutional and block codes based on latency and complexity constraints, indicating greater capacity for defining problems and controlling design bounds. Students also reported iterative processes that began with baseline designs, moved on to tool-suggested strategies, and concluded with empirical investigation and fine-tuning of their solutions, evidence that although external scaffolding facilitated idea generation, empirical confirmation informed final decisions. Additionally, they modified the proposed prototype designs with independent changes such as parameter tuning and alternative decoding strategies, and justified these divergences through ablation tests, reflecting both originality and accountability.

3.1.4 Ethics and professionalism (ILO4)

Reflections provided evidence of increased consciousness of transparency, evaluation, and fairness in technology use following the intervention. Students’ journals significantly more often stated or recorded external tools, prompts, versions, and their functional roles, and made a clearer distinction between generated content and their own original work. They also provided evidence of more active critical verification (identifying wrong outputs such as fabricated theorems or incorrect inequality signs) and kept traces of verification in the form of cross-checks against textbooks and simulation-based counterexamples. Equity concerns were raised in classroom discussions: there was a danger that students who used the tool more would gain an advantage over those who did not, and it was unclear where legitimate help ended and excessive support began for technical homework.

To present the facilitative role of the instructional interventions within the OBE framework more holistically, note that the quantitative trends in Table 3 reveal only aggregate differences; the underlying learning behaviors, mechanisms, and typical outputs for each ILO still need to be interconnected. To address this, Table 4 proposes a mapping between intended learning outcomes, quantitative proxies, qualitative evidence, and student works, aligning the core performance dimensions of the ILO categories across several data sources.

ILO1: Knowledge and Understanding. Outcome: ability to accurately explain and comprehend fundamental concepts, theorems, and performance limits in ITC, and to extend understanding to new contexts. Quantitative indicators: knowledge test scores (mean, SD). Qualitative themes: understanding of basic mechanisms; clear recognition of AI design principles; correction of misconceptions. Representative student outputs: knowledge test results; course notes; reports on AI design principles.
ILO2: Skills and Application. Outcome: ability to construct and analyze system models in ITC, and to conduct simulation and system design. Quantitative indicators: assignment scores (mean, SD). Qualitative themes: system modeling and simulation; ability to use tools for model construction and testing. Representative student outputs: system modeling reports; simulation outputs.
ILO3: Problem-Solving and Innovation. Outcome: ability to identify and analyze problems in ITC and propose innovative solutions. Quantitative indicators: project scores (mean, SD). Qualitative themes: development of innovative solutions; contextually appropriate design choices; detailed evaluation. Representative student outputs: project design reports; presentations of innovative solutions.
ILO4: Ethics and Professionalism. Outcome: ability to identify ethical issues in engineering practice and conduct evaluative judgments. Quantitative indicators: ethics test scores (mean, SD). Qualitative themes: ethical judgment and professional responsibility; transparent attribution; fairness considerations. Representative student outputs: ethics test results; analytical reports on ethical issues.

Quantitative indicators, qualitative themes, and representative student outputs corresponding to the four intended learning outcomes (ILOs).

3.2 Student engagement across stages of blended teaching

Analysis of learning activities indicated gains across the pre-, in-, and post-class phases. LMS data showed that the concept-check questions prompted students to access the introductory videos and other preparatory resources more often, prolonging time on task and enhancing pre-class preparation. Participation was more even in group work, and guiding questions calling for critiques of derivations elicited peer explanations and encouraged error identification. After class, journal reflections became more detailed, with students making explicit what informed their understanding, where they were uncertain, and how they tackled those uncertainties; a greater number of students also voluntarily submitted supplementary sensitivity analyses, revealing higher engagement and consolidation.

To view students’ behavioral performance across the stages of blended instruction more systematically, their baseline distributions on the major factors are presented first. Table 5 gives descriptive statistics for tool use frequency, course engagement, and self-efficacy, which form the basis for examining subsequent behavioral changes and the mechanisms beneath them. Figure 2 shows the time-varying behavior of student engagement across the three blended instruction phases. Engagement started from the baseline in the pre-class phase (M = 75, SD = 8), rapidly reached its highest level in the in-class phase (M = 87, SD = 7), and decreased in the post-class phase (M = 80, SD = 9) while remaining significantly higher than in the pre-class phase, suggesting that in-class interaction stimulated interest that carried over into post-class activities. Paired-samples t-tests (Table 6) showed that in-class engagement was greater than pre-class (t = 9.85, p < 0.001, Cohen’s d = 1.58) and post-class (t = −5.71, p < 0.001, Cohen’s d = 0.88), and post-class engagement was higher than pre-class (t = 3.87, p < 0.001, Cohen’s d = 0.61).

Variable M SD Min
AI use frequency (times/week) 3.46 1.21 1
AI use duration (minutes/week) 92.15 35.42 30
Perceived learning effectiveness 4.12 0.58 3
Course engagement (1–5 scale) 4.05 0.64 2.5
Programming self-efficacy 3.88 0.71 2

Descriptive statistics of key variables (N = 48).

M, mean; SD, standard deviation.

Comparison Mean ± SD Mean Difference t(47) p Cohen’s d
In-class vs. Pre-class 87.0 ± 7.0 vs. 75.0 ± 8.0 12.00 9.85 < 0.001 1.58
In-class vs. Post-class 87.0 ± 7.0 vs. 80.0 ± 9.0 −7.00 −5.71 < 0.001 0.88
Post-class vs. Pre-class 80.0 ± 9.0 vs. 75.0 ± 8.0 5.00 3.87 < 0.001 0.61

Paired-samples t-test comparisons of student engagement across blended learning stages (N = 48).

Given the large in-class vs. pre-class engagement effect (d = 1.58), we considered two alternative explanations: Hawthorne effects and instructor bias. There was no correlation between instructor intervention and engagement (r = 0.13, p = 0.36). Engagement remained stable throughout the in-class phase, with no significant difference between early and late stages, Mdiff = 1.0, 95% confidence interval (CI) [−1.1, 3.1], t = 0.98, p = 0.33; the narrow CI around zero indicates no decline over time. We therefore believe the AI-aided tasks provided real-time cognitive scaffolding for participation rather than a temporary observational effect.

These results show that in-class interaction is crucial for increasing engagement and that instructional effects can last beyond class sessions. Moreover, the fact that post-class engagement remained above pre-class levels suggests that group activities and reflective tasks were stimulating and intellectually challenging, helping sustain students’ commitment in and out of class. This pattern is consistent with the “instructional coherence hypothesis”: learning trajectories are more stable and engagement is higher when interventions are conceptually coherent across the pre-, in-, and post-class stages.

3.3 Quantitative analysis of AI usage patterns and learning outcomes

Student tool-use logs revealed that AI was used primarily for conceptual clarification, stepwise derivation guidance, and debugging. From these logs, a critical Planning–Verification pattern emerged, in which students who used the tool to request a problem-solving outline and independently verified the details tended to learn better than those who relied on it passively. To model this observation, a two-way ANOVA was performed. The interaction between AI usage pattern (Planning–Verification vs. passive reliance) and student proficiency (low, medium, high) had a significant effect on post-test scores [F(2, 42) = 4.42, p = 0.017, ηp2 = 0.17]. This result indicates statistically that the link between AI usage and learning outcomes varies with students’ proficiency.
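A minimal sketch of such a two-way ANOVA, using statsmodels on hypothetical data (column names and values below are invented; the study’s dataset is not reproduced):

```python
# Sketch of the usage-pattern x proficiency two-way ANOVA on invented data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "score": rng.normal(80, 8, 48),                                   # post-test scores
    "pattern": rng.choice(["planning_verification", "passive"], 48),  # AI usage pattern
    "proficiency": rng.choice(["low", "medium", "high"], 48),         # prior proficiency
})

model = ols("score ~ C(pattern) * C(proficiency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects plus the interaction term
```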

Figure 3 shows the effects of student proficiency (low, medium, high) and AI usage pattern on post-test scores; in the figure, the “Active Use” pattern corresponds to the Planning–Verification mode. The active pattern was associated with consistently higher post-test scores than passive reliance at all proficiency levels: mean scores for active users were around 75 for the low-proficiency group, 85 for the medium-proficiency group, and nearly 90 for the high-proficiency group, whereas passive users scored significantly lower in each corresponding group. The benefit of the active Planning–Verification pattern was greatest among high-proficiency students. These results quantitatively show that active engagement with AI outputs, rather than passive consumption, is associated with better academic performance.

3.4 Qualitative mechanisms explaining the observed effects

The pedagogical designs employed in this study appeared to facilitate deep learning through a convergence of several activity systems. Productive friction, created by the necessity to revise derivations, induced “desirable difficulties” that facilitated conceptual change and procedural accuracy. This is consistent with the observed Planning–Verification pattern, in which students did not passively accept AI outputs but engaged in active cognitive reappraisal. One student wrote in his journal: “The AI laid out steps, but it felt too direct. I checked the simulations and realized that the AI model does not account for a particular edge case. Finding the difference was when I learned the principle, not when I saw the answer.” This vividly illustrates productive friction: the friction of verification, rather than passive acceptance, was the catalyst for deeper learning.

Externalized reasoning guided by structured templates (articulating assumptions, proposing a method, justifying steps, and predicting ways things might go wrong) prompted students to express their thinking during problem solving, an important feature that helps peers and instructors diagnose misconceptions more effectively. Normalization was supported by repeated attribution checks that distinguished external help from the student’s own work and validation. This procedure also increased transparency and decreased ambiguity about what constituted appropriate levels of assistance, while redirecting focus toward verification and reflection. Instructor regulation was further facilitated by extended phases of tool use (ideation, critique, and testing) along with grading criteria weighted toward validation and reflection. Such interventions enabled students to shift from obtaining answers to grappling with the nuances of concepts.

3.5 Integration of quantitative and qualitative evidence

A synthesis of the results provides a more holistic view of student learning. The significant quantitative interaction between AI usage patterns and proficiency (Section 3.3) corresponds directly with the qualitative mechanisms (Section 3.4). We conclude that the Planning–Verification pattern is the behavioral expression of “productive friction” and “active cognitive reappraisal,” suggesting that the better learning outcomes are not coincidental but are related to deeper cognitive engagement.

In addition, the combined data show that learning gains were diverse. Beyond post-test scores, the data showed improvements in practical and professional aspects: projects were more original, and students tended to be more ethically responsible. This suggests that the intervention was associated with a broad range of positive learning outcomes.

3.6 Robustness and sensitivity checks

The protocol analysis disclosed several patterns that were consistently represented across instructional contexts. Where digital tools were used to support planning and verification, students at all levels of prior mathematics and programming contributed similarly regardless of software experience; by contrast, the effects were attenuated where direct reliance on the tool for answers was more prominent. Cross-validated evidence from tests, assignments, and reflective journals triangulated the findings cumulatively and reduced mono-method bias, while cases of high test performance with little reflection revealed areas for targeted teaching feedback. Furthermore, the greatest learning gains were observed in modules where supportive resources permeated the pre-class, in-class, and post-class stages with compulsory checking opportunities, suggesting a potential threshold effect of instructional coherence. It is important, however, to interpret these findings within the context of the study’s design: the study was conducted with a single cohort and no control or comparison group. While the results suggest that the teaching methods are strongly linked to educational achievement, they do not prove causality.

4 Discussion

This paper investigated the use of AI tools in blended, OBE-based education in an Information Theory and Coding course. The results section presented quantitative and qualitative evidence of the intervention’s association with student learning and outcomes. This section interprets those results to answer the four questions introduced in the introduction, contextualizes them within the existing literature, and presents key implications for teaching.

4.1 Alignment of AI-enabled design with intended learning outcomes (RQ1)

Our first question concerned how the AI-enabled design aligned with the achievement of the ILOs. Our results suggest strong and broad alignment, supporting the view that combining digital innovation with OBE principles is a promising way to reform curricula (Shanto et al., 2025; Mulenga and Shilongo, 2025). The quantitative results (Table 3) showed statistically significant improvement on all four ILOs, with conceptual understanding and problem solving having the largest effect sizes (d > 0.6). The Planning–Verification pattern identified in Section 3.3 was associated with higher post-test scores, showing that the design supported the acquisition of core conceptual knowledge. Greater project originality and better ethical attribution (Section 3.5) showed that the intervention also supported higher-order ILOs related to practical application, innovation, and professional conduct. This demonstrates that well-structured AI interventions can support deep, outcome-based learning.

4.2 Temporal dynamics of student engagement (RQ2)

The second question concerned how student engagement evolved across stages. The results in Sections 3.2 and 3.6 show that engagement was not a series of isolated events but a continuous flow, with in-class activities acting as a peak that carried into post-class engagement. The significant increase in engagement from pre-class to in-class (d = 1.58) and the sustained high post-class level support the “instructional coherence hypothesis.” As discussed in Section 3.6, the largest learning gains occurred when support resources spanned the pre-class, in-class, and post-class stages. Pre-class AI-aided preparation significantly increased readiness for abstract concepts (e.g., entropy), while post-lesson reflection tasks focused on error diagnosis initiated metacognitive regulation, similar to recent AI-aided error analysis (Zhang et al., 2025). This suggests that AI tools are most effective when integrated seamlessly into a continuous cycle (Wong, 2024).

4.3 Association of AI usage, proficiency, and performance (RQ3)

The third question concerned how different AI usage patterns affect the learning performance of students with different proficiency levels. Our ANOVA results (Section 3.3) showed a significant interaction effect [F(2, 42) = 4.42, p = 0.017] for active Planning–Verification use, which created “productive friction.” For students with weaker backgrounds, AI scaffolding helped them overcome “gap barriers” and engage in complex derivations. For students with solid backgrounds, AI scaffolding was used to test limits, contrast paper theory with simulation output, and explore new design alternatives. This is important because it suggests that AI tools can function as differential support systems, scaffolding novices while extending experts, consistent with equity-driven AI education (Garcia Ramos and Wilson-Kennedy, 2024; Kohen-Vacs et al., 2025).

4.4 Fostering critical thinking and academic integrity (RQ4)

Finally, our fourth research question asked how embedded checkpoints and reflective tasks promoted critical thinking and academic integrity. Section 3.4 provides the direct answer. Structured templates encouraged externalized reasoning by requiring students to articulate and justify their thinking rather than accept AI outputs at face value. Repeated attribution checks and grading criteria weighted toward validation directly addressed academic integrity. As shown in Section 3.5, these measures led to more consistent ethical attribution and a clearer distinction between generated content and original work. This transforms AI from a cheating risk into a tool for teaching responsible scholarship (Bittle and El-Gayar, 2025). Large-scale implementation requires not only technological adoption but also a reframing of course design in which students are explicitly asked to acknowledge and justify external input.

4.5 Implications, limitations, and future directions

The results suggest that effective AI integration requires a shift from viewing tools as “add-ons” to treating them as integral parts of a “productive friction” learning system. However, our approach is not without limitations. As stated in Section 3.6, the single-cohort design without a control group means our results indicate strong association, not causality; future work should use experimental or quasi-experimental designs. We have emphasized the benefits, but more attention is needed to governance issues such as unreliable outputs and over-reliance, which call for task structure and validity checks in the design (Lund et al., 2025; Park and Doo, 2024).

5 Conclusion

In conclusion, the integration of AI-enabled technologies within a blended, outcomes-based framework for mathematics-intensive engineering courses holds great promise for supporting conceptual understanding, procedural skills development and student autonomy. The evidence suggests that the value of such tools does not simply derive from providing expedient answers, but rather from scaffolding “productive friction”—guiding students to bridge abstractions and instantiations through a structured “Planning-Verification” process. This approach promotes increased equity while upholding the demands for rigor. As observed, the differential effects across the proficiency continuum confirm that when well-managed, these interventions can reduce learning barriers for at-risk learners while simultaneously pushing the exploratory boundaries for more advanced ones.

However, the effectiveness of this strategy relies on instructional models that guard against passive reliance and impaired critical thinking. This risk can be mitigated by building in verification checks, setting clear guidelines for integrity, and offering learners avenues to demonstrate authentic understanding. Incorporated into a loop of pre-class preparation, guided discovery, critical evaluation, and reflective integration, GAI serves as an enabler of deeper cognitive engagement rather than a shortcut to solutions.

Although our results strongly link this design to positive learning outcomes, future experiments will be needed to establish causality. More generally, we intend to continue cross-disciplinary trials to refine teaching models that fit both international standards and local realities, promoting the scientific and ethical advancement of technology-enhanced engineering education.

Statements

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical approval was not required for the study involving humans in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants or the participants’ legal guardians/next of kin in accordance with the national legislation and the institutional requirements.

Author contributions

XY: Investigation, Validation, Methodology, Conceptualization, Writing – review & editing, Supervision, Software, Resources, Visualization, Funding acquisition, Formal analysis, Project administration, Writing – original draft, Data curation. RC: Writing – original draft, Conceptualization, Data curation, Visualization. ZC: Project administration, Resources, Software, Investigation, Writing – original draft.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This research was funded by the provincial project of Jiangxi Provincial Higher Education Teaching Reform Research, grant number JXJG-23-23-20.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abbreviations

OBE, Outcomes-Based Education; ITC, Information Theory and Coding; STEM, Science, Technology, Engineering, and Mathematics; GAI, Generative Artificial Intelligence; ILOs, Intended Learning Outcomes; LMS, Learning Management System; MMGFs, Meta-Metacognitive Feedbacks; SD, Standard Deviation.

References

  • Alwakid, W. N., Dahri, N. A., Humayun, M., and Alwakid, G. N. (2025). Exploring the role of AI and teacher competencies on instructional planning and student performance in an outcome-based education system. Systems 13:517. doi: 10.3390/systems13070517

  • Amiri, S. M. H., and Islam, M. M. (2025). Enhancing Python programming education with an AI-powered code helper: design, implementation, and impact. Softw. Eng. 11, 1–17. doi: 10.11648/j.se.20251101.11

  • Asbari, M., and Novitasari, D. (2024). Outcome-based education model: its impact and implications for lecturer creativity and innovation in higher education. Int. J. Soc. Manag. Stud. 5, 22–31. doi: 10.5555/ijosmas.v5i5.447

  • Batista, J., Mesquita, A., and Carnaz, G. (2024). Generative AI and higher education: trends, challenges, and future directions from a systematic literature review. Information 15:676. doi: 10.3390/info15110676

  • Bittle, K., and El-Gayar, O. (2025). Generative AI and academic integrity in higher education: a systematic review and research agenda. Information 16:296. doi: 10.3390/info16040296

  • Cao, S., and Phongsatha, S. (2025). An empirical study of the AI-driven platform in blended learning for business English performance and student engagement. Lang. Test. Asia 15:39. doi: 10.1186/s40468-025-00376-7

  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. 2nd Edn. Lawrence Erlbaum Associates.

  • Cruz, M. L., Saunders-Smits, G. N., and Groen, P. (2020). Evaluation of competency methods in engineering education: a systematic review. Eur. J. Eng. Educ. 45, 729–757. doi: 10.1080/03043797.2019.1671810

  • Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71:102642. doi: 10.1016/j.ijinfomgt.2023.102642

  • Floridi, L., and Cowls, J. (2022). “A unified framework of five principles for AI in society,” in Machine Learning and the City: Applications in Architecture and Urban Design, ed. S. Carta (Cham: Springer), 535–545.

  • Garcia Ramos, J., and Wilson-Kennedy, Z. (2024). Promoting equity and addressing concerns in teaching and learning with artificial intelligence. Front. Educ. 9:1487882. doi: 10.3389/feduc.2024.1487882

  • Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., et al. (2022). Ethics of AI in education: towards a community-wide framework. Int. J. Artif. Intell. Educ. 32, 504–526. doi: 10.1007/s40593-021-00239-1

  • Ji, H., Suo, L., and Chen, H. (2024). AI performance assessment in blended learning: mechanisms and effects on students’ continuous learning motivation. Front. Psychol. 15:1447680. doi: 10.3389/fpsyg.2024.1447680

  • Jobin, A., Ienca, M., and Vayena, E. (2019). The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399. doi: 10.1038/s42256-019-0088-2

  • Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103:102274. doi: 10.1016/j.lindif.2023.102274

  • Khawrin, M. K., and Nderego, E. (2023). Opportunities and challenges of AI towards education: a systematic literature review. Int. J. Lang. Res. Educ. Stud. 13, 266–271.

  • Kohen-Vacs, D., Usher, M., and Jansen, M. (2025). Integrating generative AI into programming education: student perceptions and the challenge of correcting AI errors. Int. J. Artif. Intell. Educ. 35, 3166–3184. doi: 10.1007/s40593-025-00496-4

  • Kumar, N., Kedia, D., and Purohit, G. (2023). A review of channel coding schemes in the 5G standard. Telecommun. Syst. 83, 423–448. doi: 10.1007/s11235-023-01028-y

  • Lai, C. H., and Lin, C. Y. (2025). Analysis of learning behaviors and outcomes for students with different knowledge levels: a case study of intelligent tutoring system for coding and learning (ITS-CAL). Appl. Sci. 15, 1–24. doi: 10.3390/app15041922

  • Ling, L. S., and Krishnasamy, S. (2023). Information technology capability (ITC) framework to improve learning experience and academic achievement of mathematics in Malaysia. Electron. J. E-Learn. 21, 36–51. doi: 10.34190/ejel.21.1.2169

  • Lund, B. D., Lee, T. H., Mannuru, N. R., and Arutla, N. (2025). AI and academic integrity: exploring student perceptions and implications for higher education. J. Acad. Ethics 23, 1545–1565. doi: 10.1007/s10805-025-09613-3

  • MacCallum, K., Parsons, D., and Mohaghegh, M. (2024). The scaffolded AI literacy (SAIL) framework for education: preparing learners at all levels to engage constructively with artificial intelligence. He Rourou 1:23. doi: 10.54474/herourou.v1i1.10835

  • Mahrishi, M., Ramakrishna, S., Hosseini, S., and Abbas, A. (2025). A systematic literature review of the global trends of outcome-based education (OBE) in higher education with an SDG perspective related to engineering education. Discov. Sustain. 6, 1–21. doi: 10.1007/s43621-025-01496-z

  • Mielikäinen, M. (2022). Towards blended learning: stakeholders’ perspectives on a project-based integrated curriculum in ICT engineering education. Ind. High. Educ. 36, 74–85. doi: 10.1177/0950422221994471

  • Mo, Z., and Crosthwaite, P. (2025). Exploring the affordances of generative AI large language models for stance and engagement in academic writing. J. Engl. Acad. Purp. 75:101499. doi: 10.1016/j.jeap.2025.101499

  • Mulenga, R., and Shilongo, H. (2025). Hybrid and blended learning models: innovations, challenges, and future directions in education. Acta Pedagog. Asiana 4, 1–13. doi: 10.53623/apga.v4i1.495

  • Panday, A., Ray, T., Jalandharachari, A. S., and Gopinath, G. (2025). Insights into blended learning research: a thorough bibliometric study. Discov. Educ. 4, 1–20. doi: 10.1007/s44217-025-00439-0

  • Park, Y., and Doo, M. Y. (2024). Role of AI in blended learning: a systematic literature review. Int. Rev. Res. Open Distrib. Learn. 25, 164–196. doi: 10.19173/irrodl.v25i1.7566

  • Pinto, G., De Souza, C., Rocha, T., Steinmacher, I., Souza, A., and Monteiro, E. (2024). “Developer experiences with a contextualized AI coding assistant: usability, expectations, and outcomes,” in Proceedings of the IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI, eds. C. Kästner and E. Yoneki (Piscataway, NJ: IEEE), 81–91. doi: 10.1145/3649323.3649363

  • Rani, S., Kaur, G., and Dutta, S. (2024). “Educational AI tools: a new revolution in outcome-based education,” in Explainable AI for Education: Recent Trends and Challenges, eds. S. Rani, G. Kaur, and S. Dutta (Cham: Springer Nature Switzerland), 43–60.

  • Raza, K., Li, S., and Chua, C. (2024). A conceptual framework on imaginative education-based engineering curriculum. Sci. Educ. 33, 923–936. doi: 10.1007/s11191-022-00415-2

  • Riegel, C. (2024). “Leveraging online formative assessments within the evolving landscape of artificial intelligence in education,” in Assessment Analytics in Education: Designs, Methods and Solutions, eds. M. Sahin and D. Ifenthaler (Cham: Springer International Publishing), 355–371.

  • Sala, R., Maffei, A., Ljubić, S., Skoki, A., Pezzotta, G., Zammit, J. P., et al. (2023). Blended learning in the engineering field: a systematic literature review. Comput. Appl. Eng. Educ. 31:e22712. doi: 10.1002/cae.22712

  • Shanto, S. S., Ahmed, Z., and Jony, A. I. (2025). A proposed framework for achieving higher levels of outcome-based learning using generative AI in education. Educ. Technol. Q. 2025, 1–15. doi: 10.55056/etq.788

  • Syeed, M. M., Shihavuddin, A. S. M., Uddin, M. F., Hasan, M., and Khan, R. H. (2022). Outcome-based education (OBE): defining the process and practice for engineering education. IEEE Access 10, 119170–119192. doi: 10.1109/access.2022.3219477

  • Tempelaar, D., Rienties, B., and Giesbers, B. (2024). Dispositional learning analytics and formative assessment: an inseparable twinship. Int. J. Educ. Technol. High. Educ. 21:57. doi: 10.1186/s41239-024-00489-8

  • Vieriu, A. M., and Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Educ. Sci. 15:343. doi: 10.3390/educsci15030343

  • Wong, K. K. (2024). “Blended learning and AI: enhancing teaching and learning in higher education,” in International Conference on Blended Learning, eds. S. K. S. Cheung, L.-K. Lee, K. C. Li, J. C. K. Fu, and S. Trausan-Matu (Singapore: Springer Nature Singapore), 39–61.

  • Younas, M., El-Dakhs, D. A. S., and Jiang, Y. (2025). Knowledge construction in blended learning and its impact on students’ academic motivation and learning outcomes. Front. Educ. 10:1626609. doi: 10.3389/feduc.2025.1626609

  • Zhang, Y. F., Li, H., Song, D., Sun, L., Xu, T., and Wen, Q. (2025). From correctness to comprehension: AI agents for personalized error diagnosis in education. arXiv preprint arXiv:2502.13789. doi: 10.48550/arXiv.2502.13789


Keywords

artificial intelligence in education, blended learning, engineering education, information theory and coding, outcomes-based education

Citation

Yang X, Chen R and Chen Z (2026) AI-enabled blended teaching in Information Theory and Coding: an outcomes-based mixed-methods approach. Front. Educ. 11:1752893. doi: 10.3389/feduc.2026.1752893

Received

24 November 2025

Revised

07 February 2026

Accepted

24 February 2026

Published

13 March 2026

Volume

11 – 2026

Edited by

Fausto Ferreira, University of Zagreb, Croatia

Reviewed by

Tomislav Jagušt, University of Zagreb, Croatia

Tsvetelina Stefanova, GATE Institute, Bulgaria


Correspondence

*Correspondence: Rong Chen; Xiaocui Yang



