
AI-enabled blended teaching in Information Theory and Coding: an outcomes-based mixed-methods approach


Summary

With the rapid development of generative AI (GAI), exploring its effective integration into engineering education has become a necessity. This exploratory study incorporates GAI into an outcome-based education (OBE) framework for an Information Theory and Coding (ITC) engineering course, using mixed-methods analysis of design logic, implementation paths, and learning mechanisms. Quantitative analysis revealed that a "Planning–Verification" pattern, in which AI outputs are actively cross-validated, was associated with significantly higher learning gains than passive reliance, producing a "productive friction" observable among both low-performing and advanced students. By adding verification checkpoints, AI-assisted activities were aligned to intended learning goals, encouraging critical thinking and academic integrity. The study provides a transferable design framework and evidence-based guidance for STEM educators implementing AI in engineering education.

1 Introduction

Having evolved over the last three decades, outcome-based education (OBE) has moved from policy rhetoric to a practical architecture for curriculum design and an accreditation measure in higher education, most particularly in engineering education (Asbari and Novitasari, 2024; Mahrishi et al., 2025). It is also recognized that OBE has shaped how universities construct learning experiences by identifying clear, assessable graduate attributes informed by industry standards. The shift from input-oriented education to an outcome-based paradigm has been promoted by accreditation organizations worldwide, which have adopted continuous improvement cycles in which graduates attain not only technical competence but also problem-solving skills, communication, and an appreciation of ethics in addressing complex problems (Cruz et al., 2020; Syeed et al., 2022).

In this context, Information Theory and Coding (ITC) is a core area in the formation of engineers and data scientists. ITC provides the mathematical basis for communication technology, data compression, and error control. Fundamental topics such as Shannon's source and channel coding theorems, entropy, mutual information, and error-control codes are highly abstract and require substantial mathematical reasoning (Kumar et al., 2023; Mielikäinen, 2022; Ling and Krishnasamy, 2023). Students must be familiar with probability, linear algebra, discrete mathematics, and algorithmic reasoning to get the most out of the subject. Because of ITC's abstract nature and cumulative structure, many undergraduates struggle with persistent misconceptions, fragile knowledge transfer, and uneven outcomes. In the absence of timely formative feedback, such difficulties often go unresolved and tend to accumulate, which in turn hinders students' ability to apply theory to real engineering tasks (Raza et al., 2024).

A proposed response to this problem is blended learning, which combines flexible online modules with face-to-face teaching (Sala et al., 2023; Panday et al., 2025; Younas et al., 2025). Online materials can include narrated derivations, visual simulations, adaptive quizzes, and collaborative discussion forums, while class time is used for sense-making and peer-teaching activities. By combining online learning analytics with in-class formative assessment, faculty can identify learning gaps sooner. Early intervention is critical, as initial errors can affect subsequent content (Tempelaar et al., 2024; Riegel, 2024).

The arrival of generative artificial intelligence (GAI) in the past few years adds a further element to this hybrid approach. Powerful language models and domain-specific tools are expected to provide a variety of explanations (elaborated, simple, challenging), lead readers through step-by-step derivations (full proofs on demand), generate tailored examples and exercises, offer MATLAB/Python code completion, and dynamically detect computational or conceptual errors in real time (Dwivedi et al., 2023; Kasneci et al., 2023; Pinto et al., 2024; Batista et al., 2024; Mo and Crosthwaite, 2025). In ITC, which requires precise language processing and the integration of abstract reasoning with computational skills development, AI could function as a dynamic scaffold, tailoring support to learners' pacing and needs (Mahrishi et al., 2025; Khawrin and Nderego, 2023).

These opportunities also carry risks. Errors in AI output may lead students to false conclusions, overdependence may erode independent reasoning skills, and questions of academic integrity arise (Vieriu and Petrea, 2025). Ethical norms, such as UNESCO's recommendations on artificial intelligence ethics, emphasize human supervision, transparency, bias correction, and data privacy. There are also equity concerns, as uneven availability of AI-enabled devices, reliable Internet access, and AI literacy can widen achievement gaps, especially in resource-poor schools. Addressing these challenges implies orienting the use of AI toward educational goals and institutional safeguards (Jobin et al., 2019; Floridi and Cowls, 2022; Holmes et al., 2022).

From an OBE viewpoint, the infusion of AI into ITC teaching can be characterized through an "Outcome–Activity–Assessment" correspondence, where each planned outcome is systematically linked to a hybrid of AI-aided and traditional activities, which are themselves accompanied by assessments capturing authentic proficiency (Rani et al., 2024; Alwakid et al., 2025; Shanto et al., 2025). This paper explores how AI-enabled tasks foster deep learning through sound instructional design. These include: using AI visualization and peer teaching to grasp entropy and mutual information; deepening the understanding of error-correcting codes through AI-aided debugging and teacher guidance; and using AI simulation to analyze performance and complete academic defenses on conceptual and practical grounds (Kohen-Vacs et al., 2025; Amiri and Islam, 2025).

It is assumed that if AI is carefully integrated into a purposeful OBE blended design, student performance and engagement will improve, and usage patterns reflecting prior readiness, digital literacy, and technology access can be revealed (Cao and Phongsatha, 2025; Ji et al., 2024; Mulenga and Shilongo, 2025). Students with stronger mathematical prerequisites and greater AI literacy than their classmates will use AI for deeper exploration while others rely on procedural help, which justifies targeted scaffolding and equity-focused interventions (Garcia Ramos and Wilson-Kennedy, 2024; MacCallum et al., 2024; Lai and Lin, 2025).

Based on the above background, we design an exploratory approach to investigate the design of GAI-enabled blended instruction in OBE and its influence on student performance. In particular, we examine the match between design and learning behavior. We aim to answer the following four questions:

  • RQ1: How does AI-enabled blended instruction align with the achievement of Intended Learning Outcomes (ILOs) in a mathematics-intensive ITC course?

  • RQ2: How does student participation change dynamically across the pre-class, in-class, and post-class stages?

  • RQ3: How do different AI usage patterns (Planning–Verification vs. passive reliance) affect the learning performance of students with different prior proficiency?

  • RQ4: How do embedded verification checkpoints and reflective tasks stimulate students to think critically and remain aware of academic integrity during AI interaction?

This research aims to contribute to the empirical grounding of responsible AI in engineering education. Through a transferable design framework and evidence-based guidelines, it seeks to enable educators who wish to adapt this approach across STEM (Science, Technology, Engineering, and Mathematics) content areas and different institutional contexts, using what is known about learner mindsets and bridging the gap between AI's potential and its practical, ethical, and equitable implementation in real classrooms.

2 Materials and methods

2.1 Research design

Since this paper is intended as exploratory design research, we take a sequential explanatory mixed-methods approach to investigate the use of GAI within an OBE framework for blended teaching of an Information Theory and Coding course, considering instructional design logic, implementation pathways, and learning processes. The study remains exploratory and observational rather than establishing a direct causal link between GAI use and learning outcomes.

As shown in Figure 1, the research framework is designed as a design-based exploratory inquiry into the integration of GAI into an OBE framework for ITC. The framework comprises three phases: Phase 1 (Diagnosis and Design) is a baseline survey of students' learning and AI literacy; Phase 2 (Implementation and Intervention) is AI-driven formative assessment based on process data to adjust instruction across all learning stages; Phase 3 (Evaluation and Reflection) is an AI-driven qualitative inquiry into student engagement and conceptual growth. Importantly, the evaluation phase feeds back to refine both the theoretical development and the practical optimization of the AI-based blended teaching model.

2.2 Participants and instructional context

A total of 48 second-year students in Electronic Information Engineering from a naturally formed class participated in the present study (28 males, 58.3%, and 20 females, 41.7%). To cover the full spectrum of the cohort, general background details and educational history, such as gender profile, prior experience with AI use, and past learning experiences, were captured (see Table 1). All students had completed compulsory courses such as Advanced Mathematics, Probability and Statistics, and Digital Signal Processing, which provided a common knowledge base at the start of the course.

Variable Category n %
Major Electronic information engineering 48 100.0
Gender Male 28 58.3
Female 20 41.7
Year Sophomore 48 100.0
AI usage frequency Rare (<1 time/week) 10 20.8
Occasional (2–3 times/week) 22 45.8
Frequent (≥4 times/week) 16 33.4
Prior AI learning experience Yes 30 62.5
No 18 37.5

Sample size: We performed a power analysis using G*Power 3.1 to determine the minimum sample size needed. Following Cohen's (1988) recommended medium effect size (d = 0.5), with significance level α = 0.05 and target power 1 − β = 0.8, the required minimum sample size is 44. The final sample of 48 exceeds this level, ensuring that the study has sufficient power to test teaching effectiveness and improving the reliability of the statistical results.

The aim of the ITC course is to provide students with structured knowledge of the basic theory and tools for formulating problems and analyzing solutions using information-based approaches. As they learn the subject, students master a number of fundamental concepts such as information entropy, Shannon channel capacity, and the source and channel coding theorems, and become acquainted with error-correcting codes used in practice. This pedagogical approach allows them to connect abstract theory with practical implementation and lays solid groundwork for follow-up courses and research activities.
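The core quantities named above can be illustrated in a few lines of Python. This is a minimal sketch for orientation only (the course's actual lab exercises are not reproduced here): Shannon entropy of a discrete source and the capacity of a binary symmetric channel, C = 1 − H2(ε).

```python
import math

def entropy(p):
    """Shannon entropy H(X) in bits for a discrete distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def bsc_capacity(eps):
    """Capacity C = 1 - H2(eps) of a binary symmetric channel
    with crossover probability eps, in bits per channel use."""
    return 1.0 - entropy([eps, 1.0 - eps])

print(entropy([0.25] * 4))   # uniform 4-symbol source -> 2.0 bits
print(bsc_capacity(0.0))     # noiseless channel -> 1.0 bit/use
print(bsc_capacity(0.5))     # useless channel  -> 0.0 bits/use
```

The two extreme capacity values (ε = 0 and ε = 0.5) are the standard sanity checks students can use to verify AI-generated derivations of the capacity formula.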

The course was delivered in a blended mode comprising both online and face-to-face teaching, following an OBE approach. To inform teaching and assessment, four Intended Learning Outcomes (ILOs) were enumerated: mastering the concepts and principles of ITC; performing encoding/decoding with performance analysis and optimization; designing and validating coding schemes for different channel conditions; and fostering academic integrity and professional responsibility in both engineering practice and AI engagement.

2.3 Blended teaching and AI-enabled design

The instructional design involved three stages: pre-class preparation, in-class activities, and post-class follow-up. Based on OBE principles, the design integrated GAI tools in all stages to ensure a seamless learning experience for students. The goal is to align outcomes, activities and assessments in order to improve the learning process.

To ensure service stability, data accuracy, and applicability to the Chinese learning-management context, we designed and implemented a localized generative AI teaching solution based on a domestic learning management system (Yuketang). Students choose specific AI tools such as DeepSeek, Zhipu AI (GLM), Ernie Bot, and Tongyi Lingma according to their learning tasks, conducting activities ranging from concept exploration and literature help to code debugging. All AI tools follow the structured guidelines embedded in the LMS and a closed-loop process of "guidance within the platform, practice outside the platform, and return of results within the platform." Students must submit "AI-assisted creation logs," including screenshots of key processes and reflections, to the LMS to record learning progress. We introduce the CRIS prompt framework (Concept clarification, Role-based step-by-step guidance, Interaction debugging, and Simulation support). These prompts help students actively use AI-generated visual scripts to verify concepts and to seek debugging ideas rather than direct answers, transforming AI into a cognitive partner that supports theory, empirical analysis, and academic argumentation toward deep learning.

During pre-class, students interacted with the Learning Management System (LMS) to view introduction videos and engage with text materials, as well as use GAI to generate analogies, derivation prompts, and potential solution strategies for example problems. To focus preparation and encourage critical thinking, a brief quiz tested understanding and prompted students to consider the trustworthiness and constraints of AI-generated material.

In a follow-up in-class investigation of how these prompts might be directly instantiated to increase metacognitive support, students were organized into groups to collaborate on exercises that used GAI-generated cases to refine mathematical derivations, contrast alternative algorithmic approaches, and debug code implementations in MATLAB or Python. These group activities were reinforced with instant quizzes and Meta-Metacognitive Feedbacks (MMGFs), which not only promoted peer interaction and immediate feedback but also provided a visible measure of the achievement of the intended learning outcomes under OBE at these three levels.

In the post-class extension phase, students carried out homework and small projects that incorporated AI-aided output into their independent work. One assignment required submitting reflective journals as evidence of how they applied AI tools, e.g., what methods were used to check the accuracy of generated content and what techniques were used for refinement. Instructors reviewed these submissions with the help of a rubric that evaluated not only technical quality but also responsibility and ethics in AI use.

2.4 Data collection

Consistent with the mixed-methods design, data were collected from multiple sources to ensure triangulation and depth of insight.

2.4.1 Quantitative data

A 24-item questionnaire was used to assess demographics and three constructs, each with good internal consistency: AI Usage Patterns (e.g., "I use AI to break down complex steps"; Cronbach's α = 0.85), Perceived Learning Outcomes (e.g., "The AI tools helped me understand the concept of Channel Capacity better"; Cronbach's α = 0.89), and Ethics Awareness (e.g., "I explicitly verify AI code against textbook formulae before submission"; Cronbach's α = 0.82). This was supplemented by LMS data (visit frequency, time-on-task), in-class quiz scores, project grades, and pre/post-test results.
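Internal-consistency figures like the α values above can be computed directly from an item-response matrix. The sketch below uses made-up Likert responses, not the study's survey records, and implements the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = students, cols = items)
demo = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(demo), 2))  # → 0.92
```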

2.4.2 Qualitative data

Information was gathered through classroom observations (focusing on tool use and interaction patterns), student reflective journals, written work, and teaching logs containing formative feedback from the instructional team.

Table 2 summarizes the systematic alignment between these data sources and the specific research questions.

Research question (RQ) Focal dimensions Quantitative data sources Qualitative data sources
RQ1: ILO alignment Achievement of learning outcomes; curricular coherence. Pre/post-tests; Assignment grades; Final project scores. Teaching logs; Formative feedback from the team.
RQ2: Temporal dynamics Student engagement across pre-, in-, and post-class stages. LMS analytics (clicks, time-on-task); Quiz completion rates. Classroom observations; Interaction patterns during inquiry.
RQ3: AI usage patterns Correlation between AI behavior (Planning vs. Reliance) and proficiency. 24-item questionnaire (Usage frequency/purpose); GPA/Prior grades. Student reflective journals; Samples of AI-assisted written work.
RQ4: Critical thinking Awareness of academic integrity; Effectiveness of verification checkpoints. Questionnaire (Perception of ethics/fairness). Reflective tasks; Dialogue logs between students and GAI.

Alignment of research questions, data sources, and analytical dimensions.

2.5 Instrument validation and quality control

The research instruments were validated through a multi-stage process. For the knowledge tests, items were designed based on the course syllabus and the four OBE learning outcomes, followed by expert review to ensure content validity. To avoid memory effects while maintaining comparability, the pre- and post-tests used parallel forms covering identical concepts with different problem cases. For the skills and ethics rubrics, all subjective assignments and test items were graded by instructors blinded to student identity and submission time (pre vs. post). For the survey, a three-step process was used: pilot testing, expert panel review, and cognitive interviewing. Fifteen former ITC students completed the instrument in a pilot phase to test the item sequence and the understandability of technical terms. Two experts in AI-supported instruction and OBE-oriented curriculum design then examined the questionnaire and suggested modifications. To enhance authenticity and understandability, cognitive interviews with five current students verified that the AI-related terms and situational settings were valid. For qualitative analysis, combined deductive and inductive thematic analysis was used; the deductive framework was based on the four OBE learning outcomes, while inductive coding identified emergent themes. Two researchers coded the data independently, compared their codes, and refined inter-rater agreement through iterative discussion of discrepancies.

2.6 Data analysis

To ensure a rigorous evaluation of the AI-enabled blended teaching model, both quantitative and qualitative analyses were performed.

2.6.1 Quantitative analyses

Descriptive statistics and inferential tests (paired-samples t-tests and correlation tests) were used to examine associations and differences among AI usage frequency, usage purposes, and ILO achievement. To control Type I error across multiple comparisons (four ILOs and multiple engagement indicators), a Bonferroni correction was applied. In addition, two-way ANOVA and hierarchical regression were used to model the interactions between student proficiency and AI usage and to ensure the reliability of the results.
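The paired-test-plus-Bonferroni procedure described here can be sketched as follows. The scores are simulated stand-ins (not the study's data), but the threshold logic, test call, and paired Cohen's d computation are the standard ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_tests, alpha = 48, 4, 0.05
bonferroni_threshold = alpha / n_tests           # 0.05 / 4 = 0.0125

# Simulated pre/post scores for one ILO (NOT the study's data):
pre = rng.normal(75, 8, n)
post = pre + rng.normal(6, 7, n)                 # built-in positive gain

t, p = stats.ttest_rel(post, pre)
d = (post - pre).mean() / (post - pre).std(ddof=1)   # Cohen's d for paired data
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}, "
      f"significant after Bonferroni: {p < bonferroni_threshold}")
```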

2.6.2 Qualitative analyses

Qualitative data from reflective journals and teaching logs were coded into thematic categories using combined deductive and inductive approaches. These patterns were combined with quantitative results to provide a general interpretation of how AI support worked at different stages. Sample coded excerpts and mappings between the thematic codes and research claims are provided in Section 3.6. Coding was iteratively refined to maintain high reliability between coders.

3 Results

This section reports the learning outcomes of integrating GAI into the ITC course under OBE, organized into four dimensions: (a) attainment of OBE-aligned competencies; (b) student engagement across the stages of blended instruction; (c) AI usage patterns and their association with learning performance; and (d) emerging explanations of how the observed effects arose. Quantitative results are presented first, followed by qualitative findings and a mixed synthesis of both strands of evidence.

Table 3 summarizes the paired-samples t-test results of student performance on the four ILOs. To avoid Type I errors from multiple comparisons, we applied a Bonferroni correction. Results show statistically significant improvement on all ILOs following the intervention: conceptual understanding [t(47) = 4.35, p < 0.001, Cohen's d = 0.63], skills and application [t(47) = 4.10, p < 0.001, Cohen's d = 0.59], problem solving and innovation [t(47) = 4.50, p < 0.001, Cohen's d = 0.65], and ethics and professional competence [t(47) = 3.60, p < 0.001, Cohen's d = 0.52]. All effects exceeded the medium threshold (d > 0.50), suggesting a positive association between the AI-enhanced blended teaching model and student gains.

Intended learning outcome (ILO) Pre-test, Mean (SD) Post-test, Mean (SD) Mean difference t(df) p value Cohen's d
ILO1 (Conceptual understanding) 76.2 (8.4) 83.5 (8.7) +7.3 4.35 (47) < 0.001 0.63
ILO2 (Skills and application) 74.5 (7.6) 80.2 (8.0) +5.7 4.10 (47) < 0.001 0.59
ILO3 (Problem-solving and innovation) 73.1 (8.2) 81.0 (8.6) +7.9 4.50 (47) < 0.001 0.65
ILO4 (Ethics and professionalism) 74.0 (7.8) 78.5 (8.1) +4.5 3.60 (47) 0.001 0.52

Paired-samples t-test results for the four intended learning outcomes (ILOs) before and after the instructional intervention.

p-values were interpreted using the Bonferroni correction to account for multiple comparisons across the four ILOs. The adjusted significance threshold was set at α = 0.05/4 = 0.0125. All reported p-values remain statistically significant after correction.

3.1 OBE-aligned learning outcomes

3.1.1 Knowledge and understanding (ILO1)

Follow-up analyses showed that learning gains held up across several dimensions. Students' conceptual knowledge improved, with stronger performance on core themes of Information Theory and Coding such as mutual information, entropy, and channel capacity. These gains appeared particularly on reasoning-based and deep-comprehension tasks rather than mere recall. Transfer was also enhanced, as evidenced by better performance on near-transfer tasks applying the capacity formula under different signal-to-noise ratios. This indicates that the pre-class structure supported not just short-term prompting but genuine conceptual reorganization. Reflective journal entries also showed a reduction in common misconceptions. For example, the widespread belief that increased redundancy necessarily lowers error rates was progressively superseded by mechanism-based accounts capturing the trade-offs among coding rate, code distance, and noise.

Regarding a possible ceiling effect: the average pre-test score on concepts was 76.2 (SD = 8.4), representing a medium-to-high baseline and a possible ceiling effect. However, further analysis showed that the post-test improvement was driven mainly by medium and difficult questions (at the application and analysis cognitive levels), with the average score on high-difficulty items rising from 52.3 to 68.7. The intervention therefore promoted deep learning and higher-order cognitive engagement rather than surface knowledge acquisition.

3.1.2 Skills and application (ILO2)

Evidence following these interventions indicated significant improvement in both procedural and analytical competency. In assignments, students made fewer algebraic errors when deriving basic decoders (e.g., ML and MAP), and their solution steps appeared more systematic, indicating greater procedural fluency. Computational skill also increased: MATLAB/Python submissions gained a clear modular structure separating encoding and decoding from channel simulation, contained better test cases, and turned disparate scripts into structured, reusable workflows, leading to more efficient and maintainable designs. Furthermore, more groups performed systematic performance analysis, such as plotting BER vs. Eb/N0 at various code rates, and supported their design decisions with engineering considerations of greater analytical depth.
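A BER-versus-Eb/N0 study of the kind described can be sketched in a few lines. The following is a generic uncoded-BPSK Monte-Carlo baseline, not any student's actual project code; coded schemes would add an encoder/decoder around the channel:

```python
import numpy as np

def ber_bpsk_awgn(ebn0_db, n_bits=200_000, seed=1):
    """Monte-Carlo bit error rate of uncoded BPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    ebn0 = 10 ** (ebn0_db / 10)                 # dB -> linear
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                      # bit 0 -> +1, bit 1 -> -1
    noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), n_bits)
    decisions = (symbols + noise) < 0           # decide bit 1 if received < 0
    return np.mean(decisions != bits)

for ebn0_db in (0, 2, 4, 6, 8):
    print(f"Eb/N0 = {ebn0_db} dB: BER ≈ {ber_bpsk_awgn(ebn0_db):.4f}")
```

The simulated curve can be checked against the theoretical BER Q(√(2·Eb/N0)), which is exactly the kind of cross-validation the Planning–Verification pattern asks of students.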

3.1.3 Problem-solving and innovation (ILO3)

Project results indicated increasing maturity in design thinking and ownership of outputs. Proposals evidenced broader, context-dependent decision making, e.g., choosing between convolutional and block codes based on latency and complexity constraints, indicating greater capacity for defining problems and controlling design bounds. Students also reported iterative processes that began with baseline designs, moved to tool-suggested strategies, and then to empirical investigation and fine-tuning of their solutions, evidence that although external scaffolding facilitated idea generation, empirical confirmation informed final decisions. Additionally, they adapted the proposed prototype designs with independent changes such as parameter tuning and alternative decoding strategies, and justified these divergences through ablation tests, reflecting both originality and accountability.

3.1.4 Ethics and professionalism (ILO4)

Reflections evidenced increased awareness of transparency, evaluation, and fairness in technology use following the intervention. Students' journals significantly more often stated or recorded the external tools, prompts, versions, and functional roles used, and drew a clearer distinction between generated content and their own original work. They also showed more active critical verification (catching wrong outputs such as fabricated theorems or reversed inequality signs) and kept verification traces in the form of cross-checks against textbooks and simulation-based counterexamples. Equity concerns were raised in classroom discussions: students who used the tool more might gain an advantage over those who did not, and it was unclear where legitimate help ended and excessive support began for technical homework.

While the quantitative trends in Table 3 capture overall ILO attainment, they reveal little about the underlying learning behaviors, mechanisms, and typical outputs associated with each ILO. To address this, Table 4 maps intended learning outcomes to quantitative proxies, qualitative evidence, and representative student work, aligning the core performance dimensions of the ILOs across multiple data sources.

Intended learning outcomes (ILOs) Quantitative indicators Qualitative themes Representative student outputs
ILO1: Knowledge and Understanding. Ability to accurately explain and comprehend fundamental concepts, theorems, and performance limits in ITC, and to extend understanding to new contexts Knowledge test scores (Mean, Standard Deviation) Understanding of basic mechanisms; clear recognition of AI design principles; correction of misconceptions Knowledge test results; course notes; reports on AI design principles
ILO2: Skills and Application. Ability to construct and analyze system models in ITC, and to conduct simulation and system design Assignment scores (Mean, Standard Deviation) System modeling and simulation; ability to use tools for model construction and testing System modeling reports; simulation outputs
ILO3: Problem-Solving and Innovation. Ability to identify and analyze problems in ITC and propose innovative solutions Project scores (Mean, Standard Deviation) Development of innovative solutions; contextually appropriate design choices; detailed evaluation Project design reports; presentations of innovative solutions
ILO4: Ethics and Professionalism. Ability to identify ethical issues in engineering practice and make evaluative judgments Ethics test scores (Mean, Standard Deviation) Ethical judgment and professional responsibility; transparent attribution; fairness considerations Ethics test results; analytical reports on ethical issues

Quantitative indicators, qualitative themes, and representative student outputs corresponding to the four intended learning outcomes (ILOs).

3.2 Student engagement across stages of blended teaching

Analysis of learning activities indicated gains across the pre-, in-, and post-class phases. LMS data showed that the concept-check questions prompted students to access the introductory videos and other preparatory resources more often, prolonging time on task and enhancing pre-class preparation. Participation was more even in group work, and guiding questions calling for critiques of derivations elicited peer explanations and encouraged error identification. After class, journal reflections became more detailed, with students making explicit what informed their understanding and uncertainty, and how they tackled those uncertainties; a greater number of students also voluntarily submitted supplementary sensitivity analyses, revealing a higher level of engagement and consolidation.

To view students' behavioral performance across the stages of blended instruction more systematically, we first present their baseline distributions on the major factors. Table 5 reports descriptive statistics of AI use frequency, course engagement, and self-efficacy, which serve as a foundation for examining subsequent behavioral changes and the mechanisms beneath them. Figure 2 shows the time-varying pattern of student engagement across the three blended instruction phases. As shown, engagement started from the baseline in the pre-class phase (M = 75, SD = 8), then rapidly reached its highest level in the in-class phase (M = 87, SD = 7). Engagement decreased in the post-class phase (M = 80, SD = 9) but remained significantly higher than in the pre-class phase, suggesting that in-class interaction stimulated interest that carried over into post-class activities. Paired-samples t-tests (Table 6) confirmed that in-class engagement was higher than pre-class (t = 9.85, p < 0.001, Cohen's d = 1.58) and post-class (t = −5.71, p < 0.001, Cohen's d = 0.88) engagement, and that post-class engagement was higher than pre-class engagement (t = 3.87, p < 0.001, Cohen's d = 0.61).

Variable M SD Min
AI use frequency (times/week) 3.46 1.21 1
AI use duration (minutes/week) 92.15 35.42 30
Perceived learning effectiveness 4.12 0.58 3
Course engagement (1–5 scale) 4.05 0.64 2.5
Programming self-efficacy 3.88 0.71 2

Descriptive statistics of key variables (N = 48).

M = mean; SD = standard deviation.

Comparison Mean ± SD Mean Difference t(47) p Cohen’s d
In-class vs. Pre-class 87.0 ± 7.0 vs. 75.0 ± 8.0 12.00 9.85 < 0.001 1.58
In-class vs. Post-class 87.0 ± 7.0 vs. 80.0 ± 9.0 −7.00 −5.71 < 0.001 0.88
Post-class vs. Pre-class 80.0 ± 9.0 vs. 75.0 ± 8.0 5.00 3.87 < 0.001 0.61

Paired-sample t-test comparisons of engagement across instructional phases (N = 48).
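As a sanity check, the effect sizes in Table 6 can be approximately recovered from the reported phase means and SDs alone. The sketch below uses the average-variance pooled-SD formula for Cohen's d; the small discrepancies against the reported values (1.58, 0.88, 0.61) suggest the authors instead used the SD of the paired differences, which the published summary statistics do not allow us to reproduce exactly.

```python
import math

def pooled_d(m1, sd1, m2, sd2):
    """Cohen's d with the average-variance pooled SD:
    d = (m1 - m2) / sqrt((sd1^2 + sd2^2) / 2)."""
    return (m1 - m2) / math.sqrt((sd1**2 + sd2**2) / 2)

# Phase means/SDs reported in Table 6
print(round(pooled_d(87, 7, 75, 8), 2))  # in-class vs. pre-class  -> 1.6
print(round(pooled_d(87, 7, 80, 9), 2))  # in-class vs. post-class -> 0.87
print(round(pooled_d(80, 9, 75, 8), 2))  # post-class vs. pre-class -> 0.59
```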

Given the large in-class-versus-pre-class engagement effect (d = 1.58), we considered two alternative explanations: Hawthorne effects and instructor bias. Instructor intervention was uncorrelated with engagement (r = 0.13, p = 0.36). Engagement remained stable throughout the in-class phase, with no significant difference between early and late stages, Mdiff = 1.0 (95% confidence interval (CI) [−1.1, 3.1], t = 0.98, p = 0.33). The CI spanning zero indicates no reliable decline over time. We therefore interpret the AI-aided tasks as providing real-time cognitive scaffolding for participation rather than producing transient observational effects.
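The reported confidence interval is internally consistent with the reported t statistic. A minimal reconstruction, assuming df = 47 (N = 48 paired observations; the paper does not state the df for this check):

```python
from scipy import stats

# Reported values from the early-vs-late in-class comparison
m_diff, t_stat, df = 1.0, 0.98, 47

se = m_diff / t_stat              # standard error implied by t = M_diff / SE
t_crit = stats.t.ppf(0.975, df)   # two-sided 95% critical value, ~2.01
lo, hi = m_diff - t_crit * se, m_diff + t_crit * se
print(round(lo, 1), round(hi, 1))  # -> -1.1 3.1, matching the reported CI
```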

These results show that in-class interaction is crucial for increasing engagement and that its instructional effects can last beyond class sessions. Moreover, the fact that post-class engagement remained above the pre-class baseline shows that group activities and reflective tasks became more stimulating and intellectually challenging, which helped sustain students’ commitment in and out of class. This pattern of results is consistent with the “instructional coherence hypothesis”: learning trajectories are more stable and engagement is higher when interventions are conceptually coherent across the pre-, in-, and post-class phases.

3.3 Quantitative analysis of AI usage patterns and learning outcomes

Student tool-use logs revealed that AI was used primarily for conceptual clarification, stepwise derivation guidance, and debugging. From these logs a critical “Planning–Verification” pattern emerged, in which students who used the tool to request a problem-solving outline and then independently verified the details tended to learn better than those who relied on it passively. To model this observation, a two-way ANOVA was performed. The interaction between AI usage pattern (Planning–Verification vs. passive reliance) and student proficiency (low, medium, high) had a significant effect on post-test scores [F(2, 42) = 4.42, p = 0.017, ηp2 = 0.17]. This result indicates, statistically, that the link between AI usage and learning outcomes varies with students’ proficiency.
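The reported effect size follows directly from the F statistic and its degrees of freedom, since partial eta squared can be computed as F·df1 / (F·df1 + df2). A quick check of the figures above:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Convert an F statistic to partial eta squared:
    eta_p^2 = F * df1 / (F * df1 + df2)."""
    return F * df_effect / (F * df_effect + df_error)

# Interaction reported in Section 3.3: F(2, 42) = 4.42
print(round(partial_eta_squared(4.42, 2, 42), 2))  # -> 0.17, as reported
```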

Figure 3 shows the effects of student proficiency (low, medium, high) and AI usage pattern on post-test scores. In the figure, the “Active Use” pattern represents the “Planning–Verification” mode. The active pattern was associated with consistently higher post-test scores than passive reliance at all proficiency levels. Mean scores for active users were around 75 for the low-proficiency group, 85 for the medium-proficiency group, and nearly 90 for the high-proficiency group. In contrast, passive users scored considerably lower in every corresponding group. The results also show that the benefit of the active Planning–Verification pattern was greatest among high-proficiency students. These results quantitatively show that active use of AI leads to better academic performance than passive consumption of its outputs.

3.4 Qualitative mechanisms explaining the observed effects

The pedagogical designs employed in the current study seemed to facilitate deep learning through a convergence of several activity systems. Productive friction, established by the necessity to revise derivations, induced “desirable difficulties” that facilitated conceptual change and procedural accuracy. This is consistent with the observed “Planning–Verification” pattern, in which students did not passively accept AI outputs but engaged in active cognitive reappraisal. One student wrote in their journal, “The AI laid out steps, but it felt too direct. I checked the simulations and realized that the AI model does not account for a particular edge case. Finding the difference was when I learned the principle, not when I saw the answer.” This episode vividly illustrates productive friction: the friction of verification, rather than passive acceptance, was the catalyst for deeper learning.

Externalized reasoning guided by structured templates (articulating assumptions, proposing a method, justifying steps, and predicting ways things might go wrong) prompted students to express their thinking throughout the problem-solving process, an important feature that helps peers and instructors diagnose misconceptions more effectively. Normalization was supported by repeated attribution checks that distinguished external help from the student’s own work and validation. This procedure also increased transparency and reduced ambiguity about what constituted appropriate levels of assistance, while redirecting focus toward verification and reflection. Instructor regulation was ultimately facilitated through time-extended phases of tool use (ideation, critique, and testing) along with grading criteria weighted toward validation and reflection. Such interventions enable students to shift from obtaining answers to grappling with the nuances of concepts.

3.5 Integration of quantitative and qualitative evidence

Synthesizing the results provides a more holistic view of student learning. The significant quantitative interaction effect between AI usage pattern and proficiency (Section 3.3) converges with the qualitative mechanisms identified in Section 3.4. We conclude that the “Planning–Verification” pattern is the behavioral expression of “productive friction” and “active cognitive reappraisal,” which suggests that the better learning outcomes are not coincidental but tied to deeper cognitive engagement.

In addition, the combined data show that learning gains were multidimensional. Beyond post-test scores, the data showed improvements in practical and professional dimensions: projects were more original, and students tended to act more ethically responsibly. This suggests that the intervention was associated with a broad range of positive learning outcomes.

3.6 Robustness and sensitivity checks

The protocol analysis disclosed several patterns that appeared consistently across instructional contexts. Wherever digital tools were framed as supports for planning and verification, students contributed similarly regardless of their prior mathematics, programming, or software experience; by contrast, the effects were attenuated where direct reliance on the tool for answers was more prominent. Cross-validated evidence from tests, assignments, and reflective journals triangulated the findings cumulatively and reduced mono-method bias, while cases of high test performance with little reflection revealed areas for targeted teaching feedback. Furthermore, the greatest learning gains were observed in modules where supportive resources spanned the pre-class, in-class, and post-class phases with compulsory checking opportunities, suggesting a potential threshold effect of instructional coherence. It is important, however, to interpret these findings within the context of the study’s design. The study was conducted in a single cohort without a control or comparison group. While these results suggest that the teaching methods are strongly linked to educational achievement, they do not establish causality.

4 Discussion

This paper investigated the use of AI tools in blended, OBE-based education in an Information Theory and Coding course. The results section presented quantitative and qualitative evidence on the effects of the intervention on student learning and outcomes. This discussion interprets those results to answer the four questions introduced in the introduction, contextualizes them within the existing literature, and presents key implications for teaching.

4.1 Alignment of AI-enabled design with intended learning outcomes (RQ1)

Our first question was how the AI-based design aligned with the achievement of ILOs. Our results suggest a strong and broad alignment, supporting the view that combining digital innovation with OBE principles is a promising way to reform curricula (Shanto et al., 2025; Mulenga and Shilongo, 2025). The quantitative outcomes (Table 3) showed statistically significant improvement in all four ILOs, with conceptual understanding and problem solving having the largest effect sizes (d > 0.6). The “Planning–Verification” pattern identified in Section 3.3 was associated with higher post-test scores, confirming that the design supported acquisition of the core conceptual knowledge. Greater project originality and better ethical attributions (Section 3.5) showed that the intervention also supported higher-order ILOs related to practical application, innovation, and professional conduct. This indicates that well-structured AI interventions can support deep, outcome-based learning.

4.2 Temporal dynamics of student engagement (RQ2)

The second question concerned how student engagement evolved across the different stages. The detailed results in Sections 3.2 and 3.6 show that engagement was not a series of discrete events but a continuous flow, with in-class activities acting as a peak that sustained post-class engagement. The significant increase in engagement from pre-class to in-class (d = 1.58), and its sustained high level post-class, support the “instructional coherence hypothesis”. As noted in Section 3.6, the largest learning gains occurred when support resources spanned the pre-class, in-class, and post-class phases. Pre-class AI-aided preparation significantly increased readiness for abstract concepts (e.g., entropy), while post-lesson reflection tasks focused on error diagnosis initiated metacognitive regulation, similar to recent AI-aided error analysis (Zhang et al., 2025). This implies that AI tools are most effective when integrated seamlessly into a continuous cycle (Wong, 2024).
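To make the abstract concept concrete: entropy, the kind of pre-class target concept mentioned above, reduces to a short computation. The snippet below is an illustrative sketch of the sort of preparatory exercise an ITC course might assign, not an artifact of this study:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum p_i * log2(p_i), in bits.
    Zero-probability outcomes contribute nothing and are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per toss;
# a biased source is more predictable and carries less.
print(shannon_entropy([0.5, 0.5]))           # -> 1.0
print(round(shannon_entropy([0.9, 0.1]), 3)) # -> 0.469
```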

4.3 Association of AI usage, proficiency, and performance (RQ3)

The third question was how different AI usage patterns affect the learning performance of students at different proficiency levels. Our ANOVA results (Section 3.3) showed a significant interaction effect [F(2, 42) = 4.42, p = 0.017] for active “Planning–Verification” use, which created “productive friction”. For students with weaker backgrounds, AI scaffolding helped them overcome gaps in prerequisite knowledge and engage in complex derivations. Students with strong backgrounds used AI scaffolding to test limits, contrast on-paper theory with simulation output, and explore new design options. This matters because it suggests that AI tools can function as differentiated support systems (scaffolding novices while extending experts), consistent with equity-driven AI education (Garcia Ramos and Wilson-Kennedy, 2024; Kohen-Vacs et al., 2025).

4.4 Fostering critical thinking and academic integrity (RQ4)

Finally, our fourth research question was how embedded checkpoints and reflective tasks promoted critical thinking and academic integrity. Section 3.4 provides the direct answer. Structured templates encouraged externalized reasoning by requiring students to articulate and justify their thinking rather than simply accepting AI outputs. Repeated attribution checks and grading criteria weighted toward validation directly addressed academic integrity. As seen in Section 3.5, these measures led to more consistent ethical attributions and a clearer distinction between generated content and original work. This transforms the AI from a cheating tool into a tool for teaching responsible scholarship (Bittle and El-Gayar, 2025). Large-scale implementation requires not only technological adoption but also a reframing of course design in which students are explicitly asked to acknowledge and justify external input.

4.5 Implications, limitations, and future directions

The results suggest that effective AI integration requires shifting from viewing tools as “add-ons” to treating them as integral parts of a “productive friction” learning system. However, our approach is not without limitations. As stated in Section 3.6, the single-cohort design without a control group means our results establish association, not causality. Future work should use experimental or quasi-experimental designs. We have emphasized the benefits, but governance issues such as unreliable outputs and over-reliance require more attention, and task structure and validity checks must be built into the design (Lund et al., 2025; Park and Doo, 2024).

5 Conclusion

In conclusion, the integration of AI-enabled technologies within a blended, outcomes-based framework for mathematics-intensive engineering courses holds great promise for supporting conceptual understanding, procedural skills development and student autonomy. The evidence suggests that the value of such tools does not simply derive from providing expedient answers, but rather from scaffolding “productive friction”—guiding students to bridge abstractions and instantiations through a structured “Planning-Verification” process. This approach promotes increased equity while upholding the demands for rigor. As observed, the differential effects across the proficiency continuum confirm that when well-managed, these interventions can reduce learning barriers for at-risk learners while simultaneously pushing the exploratory boundaries for more advanced ones.

However, the effectiveness of this strategy relies on instructional models that guard against passive reliance and impeded critical thinking. This is a risk that can be mitigated by building in verification checks, setting clear guidelines for integrity, and offering avenues for learners to demonstrate authentic understanding. Incorporated into a loop of pre-class preparation, guided discovery, critical evaluation, and reflective integration, GAI serves as an enabler of deeper cognitive engagement rather than a shortcut to solutions.

Although our results strongly link this design to positive learning outcomes, future experimental studies are needed to establish causality. More generally, we plan to continue cross-disciplinary trials to refine teaching models that fit both international standards and local realities, promoting the scientific and ethical advancement of technology-enhanced engineering education.

Statements

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical approval was not required for the study involving humans in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants or the participants’ legal guardians/next of kin in accordance with the national legislation and the institutional requirements.

Author contributions

XY: Investigation, Validation, Methodology, Conceptualization, Writing – review & editing, Supervision, Software, Resources, Visualization, Funding acquisition, Formal analysis, Project administration, Writing – original draft, Data curation. RC: Writing – original draft, Conceptualization, Data curation, Visualization. ZC: Project administration, Resources, Software, Investigation, Writing – original draft.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This research was funded by the provincial project of Jiangxi Provincial Higher Education Teaching Reform Research, grant number JXJG-23-23-20.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI assertion

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abbreviations

OBE, Outcomes-Based Education; ITC, Information Theory and Coding; STEM, Science, Technology, Engineering, and Mathematics; GAI, Generative Artificial Intelligence; ILOs, Intended Learning Outcomes; LMS, Learning Management System; MMGFs, Meta-Metacognitive Feedbacks; SD, Standard Deviation.

References

  • Alwakid, W. N., Dahri, N. A., Humayun, M., and Alwakid, G. N. (2025). Exploring the role of AI and teacher competencies on instructional planning and student performance in an outcome-based education system. Systems 13:517. doi: 10.3390/systems13070517

  • Amiri, S. M. H., and Islam, M. M. (2025). Enhancing Python programming education with an AI-powered code helper: design, implementation, and impact. Softw. Eng. 11, –17. doi: 10.11648/j.se.20251101.11

  • Asbari, M., and Novitasari, D. (2024). Outcome-based education model: its impact and implications for lecturer creativity and innovation in higher education. Int. J. Social Management Studies 5, 22–31. doi: 10.5555/ijosmas.v5i5.447

  • Batista, J., Mesquita, A., and Carnaz, G. (2024). Generative AI and higher education: trends, challenges, and future directions from a systematic literature review. Information 15:676. doi: 10.3390/info15110676

  • Bittle, K., and El-Gayar, O. (2025). Generative AI and academic integrity in higher education: a systematic review and research agenda. Information 16:296. doi: 10.3390/info16040296

  • Cao, S., and Phongsatha, S. (2025). An empirical study of the AI-driven platform in blended learning for business English performance and student engagement. Lang. Test. Asia 15:39. doi: 10.1186/s40468-025-00376-7

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.

  • Cruz, M. L., Saunders-Smits, G. N., and Groen, P. (2020). Evaluation of competency methods in engineering education: a systematic review. Eur. J. Eng. Educ. 45, 729–757. doi: 10.1080/03043797.2019.1671810

  • Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). Opinion paper: “so what if ChatGPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71:102642. doi: 10.1016/j.ijinfomgt.2023.102642

  • Floridi, L., and Cowls, J. (2022). “A unified framework of five principles for AI in society,” in Machine Learning and the City: Applications in Architecture and Urban Design, ed. S. Carta (Cham: Springer), 535–545.

  • Garcia Ramos, J., and Wilson-Kennedy, Z. (2024). Promoting equity and addressing concerns in teaching and learning with artificial intelligence. Front. Educ. 9:1487882. doi: 10.3389/feduc.2024.1487882

  • Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., et al. (2022). Ethics of AI in education: towards a community-wide framework. Int. J. Artif. Intell. Educ. 32, 504–526. doi: 10.1007/s40593-021-00239-1

  • Ji, H., Suo, L., and Chen, H. (2024). AI performance assessment in blended learning: mechanisms and effects on students’ continuous learning motivation. Front. Psychol. 15:1447680. doi: 10.3389/fpsyg.2024.1447680

  • Jobin, A., Ienca, M., and Vayena, E. (2019). The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399. doi: 10.1038/s42256-019-0088-2

  • Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103:102274. doi: 10.1016/j.lindif.2023.102274

  • Khawrin, M. K., and Nderego, E. (2023). Opportunities and challenges of AI towards education: a systematic literature review. Int. J. Language Research Education Studies 13, 266–271.

  • Kohen-Vacs, D., Usher, M., and Jansen, M. (2025). Integrating generative AI into programming education: student perceptions and the challenge of correcting AI errors. Int. J. Artif. Intell. Educ. 35, 3166–3184. doi: 10.1007/s40593-025-00496-4

  • Kumar, N., Kedia, D., and Purohit, G. (2023). A review of channel coding schemes in the 5G standard. Telecommun. Syst. 83, 423–448. doi: 10.1007/s11235-023-01028-y

  • Lai, C. H., and Lin, C. Y. (2025). Analysis of learning behaviors and outcomes for students with different knowledge levels: a case study of intelligent tutoring system for coding and learning (ITS-CAL). Appl. Sci. 15, 1–24. doi: 10.3390/app15041922

  • Ling, L. S., and Krishnasamy, S. (2023). Information technology capability (ITC) framework to improve learning experience and academic achievement of mathematics in Malaysia. Electron. J. E-Learn. 21, 36–51. doi: 10.34190/ejel.21.1.2169

  • Lund, B. D., Lee, T. H., Mannuru, N. R., and Arutla, N. (2025). AI and academic integrity: exploring student perceptions and implications for higher education. J. Acad. Ethics 23, 1545–1565. doi: 10.1007/s10805-025-09613-3

  • MacCallum, K., Parsons, D., and Mohaghegh, M. (2024). The scaffolded AI literacy (SAIL) framework for education: preparing learners at all levels to engage constructively with artificial intelligence. He Rourou 1:23. doi: 10.54474/herourou.v1i1.10835

  • Mahrishi, M., Ramakrishna, S., Hosseini, S., and Abbas, A. (2025). A systematic literature review of the global trends of outcome-based education (OBE) in higher education with an SDG perspective related to engineering education. Discov. Sustain. 6, 1–21. doi: 10.1007/s43621-025-01496-z

  • Mielikäinen, M. (2022). Towards blended learning: stakeholders’ perspectives on a project-based integrated curriculum in ICT engineering education. Ind. High. Educ. 36, 74–85. doi: 10.1177/0950422221994471

  • Mo, Z., and Crosthwaite, P. (2025). Exploring the affordances of generative AI large language models for stance and engagement in academic writing. J. Engl. Acad. Purp. 75:101499. doi: 10.1016/j.jeap.2025.101499

  • Mulenga, R., and Shilongo, H. (2025). Hybrid and blended learning models: innovations, challenges, and future directions in education. Acta Pedagog. Asiana 4, 1–13. doi: 10.53623/apga.v4i1.495

  • Panday, A., Ray, T., Jalandharachari, A. S., and Gopinath, G. (2025). Insights into blended learning research: a thorough bibliometric study. Discov. Educ. 4, 1–20. doi: 10.1007/s44217-025-00439-0

  • Park, Y., and Doo, M. Y. (2024). Role of AI in blended learning: a systematic literature review. Int. Rev. Res. Open Distrib. Learn. 25, 164–196. doi: 10.19173/irrodl.v25i1.7566

  • Pinto, G., De Souza, C., Rocha, T., Steinmacher, I., Souza, A., and Monteiro, E. (2024). “Developer experiences with a contextualized AI coding assistant: usability, expectations, and outcomes,” in Proceedings of the IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI, eds. C. Kästner and E. Yoneki (Piscataway, NJ: IEEE), 81–91. doi: 10.1145/3649323.3649363

  • Rani, S., Kaur, G., and Dutta, S. (2024). “Educational AI tools: a new revolution in outcome-based education,” in Explainable AI for Education: Recent Trends and Challenges, eds. S. Rani, G. Kaur, and S. Dutta (Cham: Springer Nature Switzerland), 43–60.

  • Raza, K., Li, S., and Chua, C. (2024). A conceptual framework on imaginative education-based engineering curriculum. Sci. Educ. 33, 923–936. doi: 10.1007/s11191-022-00415-2

  • Riegel, C. (2024). “Leveraging online formative assessments within the evolving landscape of artificial intelligence in education,” in Assessment Analytics in Education: Designs, Methods and Solutions, eds. M. Sahin and D. Ifenthaler (Cham: Springer International Publishing), 355–371.

  • Sala, R., Maffei, A., Ljubić, S., Skoki, A., Pezzotta, G., Zammit, J. P., et al. (2023). Blended learning in the engineering field: a systematic literature review. Comput. Appl. Eng. Educ. 31:e22712. doi: 10.1002/cae.22712

  • Shanto, S. S., Ahmed, Z., and Jony, A. I. (2025). A proposed framework for achieving higher levels of outcome-based learning using generative AI in education. Educ. Technol. Q. 2025(1), 1–15. doi: 10.55056/etq.788

  • Syeed, M. M., Shihavuddin, A. S. M., Uddin, M. F., Hasan, M., and Khan, R. H. (2022). Outcome-based education (OBE): defining the process and practice for engineering education. IEEE Access 10, 119170–119192. doi: 10.1109/access.2022.3219477

  • Tempelaar, D., Rienties, B., and Giesbers, B. (2024). Dispositional learning analytics and formative assessment: an inseparable twinship. Int. J. Educ. Technol. High. Educ. 21:57. doi: 10.1186/s41239-024-00489-8

  • Vieriu, A. M., and Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Educ. Sci. 15:343. doi: 10.3390/educsci15030343

  • Wong, K. K. (2024). “Blended learning and AI: enhancing teaching and learning in higher education,” in International Conference on Blended Learning, eds. S. K. S. Cheung, L.-K. Lee, K. C. Li, J. C. K. Fu, and S. Trausan-Matu (Singapore: Springer Nature Singapore), 39–61.

  • Younas, M., El-Dakhs, D. A. S., and Jiang, Y. (2025). Knowledge construction in blended learning and its impact on students’ academic motivation and learning outcomes. Front. Educ. 10:1626609. doi: 10.3389/feduc.2025.1626609

  • Zhang, Y. F., Li, H., Song, D., Sun, L., Xu, T., and Wen, Q. (2025). From correctness to comprehension: AI agents for personalized error diagnosis in education. arXiv preprint arXiv:2502.13789. doi: 10.48550/arXiv.2502.13789

Abstract

Keywords

artificial intelligence in education, blended learning, engineering education, information theory and coding, outcomes-based education

Citation

Yang X, Chen R and Chen Z (2026) AI-enabled blended teaching in Information Theory and Coding: an outcomes-based mixed-methods approach. Front. Educ. 11:1752893. doi: 10.3389/feduc.2026.1752893

Received

24 November 2025

Revised

07 February 2026

Accepted

24 February 2026

Published

13 March 2026

Volume

11 – 2026

Edited by

Fausto Ferreira, University of Zagreb, Croatia

Reviewed by

Tomislav Jagušt, University of Zagreb, Croatia

Tsvetelina Stefanova, GATE Institute, Bulgaria

Copyright

*Correspondence: Rong Chen, ; Xiaocui Yang,



