Information Science and Engineering

A knowledge tracing prediction model for learning trajectories

  • Yifei ZHANG ,
  • Jiajin ZHANG ,
  • Kaijun GUAN ,
  • Yuxue ZHANG
  • College of Computer Science, Shenyang Aerospace University, Shenyang 110136, China

ZHANG Yifei (1976-), male, born in Baishan, Jilin; professor, Ph.D. His main research interests include educational big data and smart education technology. E-mail:

Received date: 2024-02-15

  Online published: 2024-08-30

Funding

National Natural Science Foundation of China (62102271)


Abstract

Based on the transformer architecture, a knowledge tracing prediction model for learning trajectories (LTKT) was proposed to address the following problems of applying the transformer architecture to knowledge tracing: the model lacked knowledge-concept information; attention was scattered over many weakly related exercises because attention scores follow a long-tail distribution, at quadratic computational cost; and the prediction strategy ignored the influence of learning ability on answering decisions. In the data preprocessing stage, LTKT used the knowledge integration mechanism from the field of education to merge the multiple knowledge concepts involved in each exercise, and the integrated knowledge was fed to the model together with the other learning-trajectory information. In the encoder and decoder, LTKT introduced a sparse self-attention mechanism that exploits the long-tail distribution of attention scores and embedded in it a positional encoding containing both absolute and relative distances, so that attention concentrates on a small number of highly similar exercises while even the deep attention layers perceive the positional relationships between them. In the prediction strategy, LTKT used a bilinear layer to fuse the learning ability features extracted by the learning ability extraction module with the knowledge state output by the decoder, and comprehensively predicted the student's answer performance at the next moment. Experiments on two large real-world public datasets and comparisons with other strong models show that LTKT achieves a significant improvement in AUC.
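As an illustration only, the sketch below shows in minimal PyTorch form the two mechanisms named in the abstract: top-k sparse self-attention (keeping only the few highly similar exercises suggested by the long-tail score distribution) and bilinear fusion of a learning-ability vector with the knowledge state. It is not the authors' implementation; the module names, the `top_k` threshold, the omission of causal masking, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSelfAttention(nn.Module):
    """Single-head self-attention that keeps only the top-k scores per query,
    so the softmax concentrates on a few highly similar exercises.
    (Hypothetical sketch; causal masking and multi-head logic omitted.)"""

    def __init__(self, d_model: int, top_k: int = 16):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.top_k = top_k
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale

        # Mask everything below each query's k-th largest score to -inf,
        # emulating the long-tail-aware sparsification described above.
        k_eff = min(self.top_k, scores.size(-1))
        topk_vals, _ = scores.topk(k_eff, dim=-1)
        threshold = topk_vals[..., -1:].expand_as(scores)
        scores = scores.masked_fill(scores < threshold, float("-inf"))

        attn = F.softmax(scores, dim=-1)
        return torch.matmul(attn, v)


class BilinearFusion(nn.Module):
    """Fuses a learning-ability feature vector with the decoder's knowledge
    state via a bilinear layer and predicts the probability of answering
    the next exercise correctly (dimensions are illustrative)."""

    def __init__(self, state_dim: int, ability_dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(state_dim, ability_dim, 1)

    def forward(self, knowledge_state: torch.Tensor,
                ability: torch.Tensor) -> torch.Tensor:
        # knowledge_state: (batch, state_dim); ability: (batch, ability_dim)
        return torch.sigmoid(self.bilinear(knowledge_state, ability))
```

In the full model these blocks would presumably sit inside the transformer encoder/decoder and after the learning-ability extraction module, respectively; they are shown here only to make the abstract's description concrete.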

How to cite this article

Yifei ZHANG, Jiajin ZHANG, Kaijun GUAN, Yuxue ZHANG. A knowledge tracing prediction model for learning trajectories[J]. Journal of Shenyang Aerospace University, 2024, 41(3): 61-70. DOI: 10.3969/j.issn.2095-1248.2024.03.009


