Qiutong Wang
Shanghai World Foreign Language Academy, Shanghai, China
Renaissance 2025, 1(03); https://doi.org/10.70548/ra142143
Submission received: 8 January 2025 / Revised: 27 January 2025 / Accepted: 23 February 2025 / Published: 17 March 2025
Abstract
Traditional chemistry lab instruction often neglects students’ diverse learning needs, reducing engagement and skill acquisition. This study explores DeepSeek, an AI-driven assistant, for personalizing lab education using the VARK model (Visual, Aural, Read/Write, Kinesthetic). In a 6-week quasi-experiment with 120 high school students grouped by VARK styles, DeepSeek provided tailored support—such as 3D animations for Visual learners and virtual simulations for Kinesthetic learners—evaluated through pre- and post-tests on performance, accuracy, behavior, and satisfaction. Findings showed Visual and Kinesthetic learners gained the most, benefiting from effective visual and interactive tools, while Aural and Read/Write learners showed limited improvement, highlighting weaknesses in voice and text support. The “AI-Learning Style” framework advances personalized education theory by quantifying adaptability, and the “Three-Dimensional Adaptability Model” guides smart lab tool optimization. Constrained by sample size and duration, future research should refine multimodal approaches and expand contexts to enhance AI’s role in precise, intelligent lab instruction.
KEYWORDS: VARK Model, Personalized Education, Artificial Intelligence, Chemistry Lab, Learning Styles, DeepSeek
1 Introduction
1.1 Research Background
Chemistry lab courses are vital in science education, linking theory to practice and fostering skills like critical thinking. At the high school level, they aim to ignite scientific interest through hands-on practice. However, traditional uniform teaching struggles to address diverse learning needs, limiting student potential and engagement. The VARK model (Fleming & Mills, 1992) classifies learners as Visual (preferring images), Aural (favoring discussions), Read/Write (liking text), and Kinesthetic (needing hands-on activity). In labs, Visual learners grasp phenomena visually, while Kinesthetic learners thrive with manipulation, yet conventional methods—demonstrations and written guides—often ignore these differences, yielding inconsistent outcomes.
AI advancements, like DeepSeek, offer personalized learning potential with adaptive, multimodal tools (e.g., animations, virtual labs). However, prior research focuses on knowledge delivery, rarely exploring AI’s adaptability to learning styles in lab settings, a gap this study addresses.
1.2 Research Significance
1.2.1 Theoretical Value
This study leverages the VARK learning style model to thoroughly investigate how DeepSeek adapts to the diverse needs of students in chemistry laboratory learning, culminating in the development of an “AI-Learning Style” adaptability evaluation framework. By quantifying the alignment between AI tools and learning styles, this framework enriches the theoretical landscape of personalized learning and provides a systematic analytical tool for matching intelligent educational technologies with individual student characteristics. Furthermore, this research opens new theoretical perspectives for intelligent laboratory instruction, enhancing the understanding of AI’s role in facilitating tailored education.
1.2.2 Practical Value
As smart laboratories emerge as a key trend in modern education, the specific application value of intelligent tools like DeepSeek remains to be substantiated. Through experimental data analysis, this study evaluates DeepSeek’s adaptability and proposes a “Three-Dimensional Adaptability Model” (cognitive, affective, behavioral), offering actionable reference standards for selecting and optimizing intelligent learning tools in smart laboratories. This model not only elevates the personalization of laboratory instruction but also propels smart laboratories toward greater efficiency and intelligence, providing practical guidance for educators.
1.3 Research Objectives and Questions
This study aims to experimentally validate DeepSeek’s adaptability in chemistry laboratory learning, explore its impact on the learning outcomes of students with different learning styles, and construct a theoretical and practical framework for personalized laboratory instruction. To achieve this objective, the study focuses on the following key questions:
1. Can DeepSeek, as an intelligent learning assistant, effectively adapt to students with varying learning styles and significantly enhance their chemistry laboratory learning outcomes?
2. Under DeepSeek’s intervention, do students with different learning styles exhibit significant differences in performance metrics such as experimental scores and operational accuracy?
3. How can the relationship between DeepSeek’s adaptability and students’ learning outcomes be quantitatively assessed?
By addressing these questions, this study seeks to bridge the gap in research on AI adaptability in chemistry laboratory learning, offering theoretical support and practical insights for the further development of educational technology.
2 Literature Review
2.1 Learning Style Theories and Chemistry Laboratory Learning
Learning style theories posit that individuals exhibit distinct preferences in how they receive, process, and apply information, directly influencing the quality of their learning outcomes. The VARK model (Fleming & Mills, 1992), a widely recognized classification framework, categorizes learning styles into four types: Visual (preferring images, charts, and animations), Aural (relying on lectures and discussions), Read/Write (favoring text reading and note-taking), and Kinesthetic (emphasizing practical operation and tactile experience). This model has been extensively applied in educational research. Visual learners tend to comprehend knowledge through images, charts, and animations (Mayer, 2009); aural learners demonstrate higher efficiency when listening to explanations or engaging in discussions (Rogowsky et al., 2015); read/write learners prefer textual materials and note-taking (Pashler et al., 2008); and kinesthetic learners achieve optimal results through hands-on practice and physical interaction (Barbe & Swassing, 1979).
In the context of chemistry laboratory learning, the differences in learning styles are particularly pronounced. Chemistry experiments encompass abstract concepts (e.g., molecular structures), dynamic processes (e.g., chemical reactions), and concrete operations (e.g., instrument use), placing diverse demands on students’ cognitive processing. For instance, Reid (2005) notes that visual learners more readily form intuitive impressions by observing experimental phenomena, while kinesthetic learners enhance their mastery of skills through repeated hands-on practice. However, traditional chemistry lab instruction often relies on uniform demonstrations and written instructions, failing to adequately address these individual differences. Research by Hawk and Shah (2007) suggests that instructional designs neglecting learning styles may lead to reduced student engagement, inadequate skill acquisition, and even diminished interest in science. Consequently, optimizing chemistry laboratory instruction to accommodate diverse learning styles has emerged as a critical topic in educational research.
2.2 AI in Personalized Learning
2.2.1 Functions and Advantages of AI Intelligent Learning Assistants
With the rapid advancement of artificial intelligence (AI) technologies, personalized learning has encountered new opportunities. Large language models (e.g., GPT-4, DeepSeek) and adaptive learning systems are increasingly applied in education. Zhai and Feng (2023) highlight that AI systems like DeepSeek, leveraging natural language processing (NLP) and data analytics, can accurately identify students’ learning needs and provide tailored feedback. For instance, DeepSeek can generate specific explanations based on students’ questions or adjust content difficulty according to their learning progress. This adaptive capability positions it as a dedicated tutor, dynamically addressing students’ cognitive demands. A meta-analysis by Kulik and Fletcher (2016) further confirms that AI-assisted instruction outperforms traditional methods by approximately 0.5 standard deviations, with particularly notable effectiveness in scenarios requiring immediate feedback.
2.2.2 AI Practices in STEM Education
AI’s application is especially prominent in STEM (Science, Technology, Engineering, and Mathematics) education. Luckin et al. (2021) demonstrate that AI-driven virtual lab platforms (e.g., Labster) simulate real experimental environments, offering students convenient remote learning experiences. By integrating augmented reality (AR) and virtual reality (VR) technologies, virtual labs overcome the time and resource constraints of traditional laboratories while allowing repeated practice. Merchant et al. (2014) found in a meta-analysis that virtual labs significantly enhance experimental skills (effect size d = 0.58) compared to traditional instruction. Additionally, AI optimizes learning paths through behavioral data analysis (e.g., click frequency, dwell time), further improving learning efficiency.
However, limitations persist in AI’s application within STEM education. Existing research predominantly focuses on knowledge delivery and virtual platform development, with less attention to providing differentiated support based on learning styles. For example, Chen et al. (2022) note that current virtual lab systems primarily emphasize visual presentation, inadequately addressing the needs of aural and read/write learners. Moreover, the contextual demands of lab learning (e.g., authentic operational experience) remain challenging for AI to fully replicate, causing difficulties for some students in real-world experiments. These issues underscore a research gap in AI’s role in personalized laboratory instruction.
2.3 Innovation of This Study
A review of existing literature reveals that AI applications in personalized learning primarily concentrate on content recommendation and learning path optimization. For instance, Huang et al. (2021) developed an AI-based chemistry knowledge recommendation system but did not address learning style adaptability. In contrast, systematic studies targeting different learning styles in chemistry laboratory learning are scarce, particularly those integrating the VARK model with AI adaptability, which remain largely unexplored. Furthermore, while behavioral data analysis has been employed to assess learning outcomes (e.g., Zhang & Liu, 2023), there is a lack of quantitative evaluation frameworks for AI adaptability, hindering educators’ ability to scientifically measure its instructional impact.
This study bridges this gap by pioneering the application of DeepSeek to personalized adaptability in chemistry lab learning. It designs targeted strategies based on the VARK model and proposes an “AI-Learning Style” adaptability evaluation framework. Unlike prior studies, this research not only focuses on technical implementation but also validates adaptability through experimental data, constructing a systematic framework that integrates behavioral analysis and subjective evaluation. This innovation offers new perspectives and methodologies for the deeper application of AI in personalized education, laying a theoretical and practical foundation for future research.
3 Methodology
3.1 Research Design
This study employs a quasi-experimental design based on the VARK learning style model, utilizing a pre-test-intervention-post-test approach to systematically evaluate the adaptability of DeepSeek AI in chemistry laboratory learning. The VARK model (Fleming & Mills, 1992) categorizes students into four types—Visual, Aural, Read/Write, and Kinesthetic—providing a theoretical foundation for grouping. The experimental process consists of three phases:
– Pre-test Phase: All participants completed the VARK learning style questionnaire (Fleming, 2001) to determine their learning style preferences, followed by a baseline chemistry lab test. The test included understanding of experimental principles (multiple-choice questions, 20 points), operational standardization (rating scale, 30 points), and result analysis (open-ended questions, 50 points), totaling 100 points, aimed at assessing students’ initial competency levels.
– Intervention Phase: Based on VARK grouping results, students underwent a 6-week DeepSeek AI-assisted learning program, with two 60-minute lab sessions per week. DeepSeek provided personalized support tailored to each learning style (see Section 3.3 for specific strategies). The 6-week duration was chosen to allow sufficient time for students to adapt to the AI system and demonstrate learning outcomes while avoiding excessive fatigue.
– Post-test Phase: After the intervention, students completed the same lab test as in the pre-test phase. Pre- and post-test data were compared to evaluate changes in learning performance and operational accuracy. Additionally, behavioral data and questionnaire surveys were collected to comprehensively analyze DeepSeek’s adaptability effects.
To control for external variables, the experiment was conducted in a standardized laboratory environment, with consistent teacher guidance and textbook content, isolating DeepSeek’s adaptability strategies as the sole independent variable.
3.2 Research Participants
The study recruited 120 second-year high school students from two public high schools in the same city (anonymized), all of whom were enrolled in a non-major chemistry elective course, ensuring comparable initial experimental skills. A stratified random sampling method was used, with participants allocated proportionally by school and class to minimize biases in gender or academic background. The final sample comprised 52% females and 48% males, with an average age of 16.8 years (SD = 0.5).
Based on the VARK questionnaire results from the pre-test phase, students were divided into four groups: Visual (31 students), Aural (28 students), Read/Write (30 students), and Kinesthetic (31 students), with group sizes nearly balanced. Post-grouping, a one-way analysis of variance (ANOVA) confirmed no significant differences in pre-test scores across the four groups (F = 1.23, p = 0.31), ensuring fairness at the experimental baseline.
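The baseline-equivalence check described above (a one-way ANOVA across the four groups) can be reproduced with a few lines of code. Below is a minimal pure-Python sketch of the F statistic; the score lists are invented placeholders for illustration, not the study's raw data.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for k independent groups of scores."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: each group mean vs the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: each score vs its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative pre-test scores for the four VARK groups (invented data)
groups = {
    "Visual":      [64, 66, 65, 67, 63],
    "Aural":       [62, 65, 64, 63, 66],
    "Read/Write":  [68, 66, 67, 69, 66],
    "Kinesthetic": [60, 63, 62, 61, 64],
}
f_stat = one_way_anova_f(list(groups.values()))
```

An F near 1 with a large p-value, as the study reports (F = 1.23, p = 0.31), indicates the four groups started from statistically indistinguishable baselines.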
3.3 AI Adaptation Strategies
DeepSeek AI integrates natural language processing (NLP) and computer vision (CV) technologies to provide customized support for students with different learning styles. The specific strategies are as follows:
– Visual (V): Offers 3D animated experiment demonstrations, dynamically illustrating chemical reaction processes (e.g., color changes in acid-base titrations) and molecular structure transformations (e.g., the spatial configuration of H2O). Animations are generated by DeepSeek’s visualization module at a frame rate of 30 fps, ensuring smoothness and realism.
– Aural (A): Equipped with an AI voice assistant that delivers step-by-step explanations of experimental procedures in natural speech (e.g., “Slowly add 10 mL of NaOH to the beaker”) and reinforces principles (e.g., “Neutralization reactions produce salt and water”). The speech rate is adjustable (default: 120 words per minute) and supports interactive student queries.
– Read/Write (R): Provides structured textual guidance, including experiment background, steps, and precautions (e.g., “Avoid reagent spillage”), along with personalized note generation. Interactive quizzes (10 multiple-choice questions) adjust difficulty in real-time based on student responses, reinforcing knowledge consolidation.
– Kinesthetic (K): Offers a virtual simulation lab platform where students can operate instruments (e.g., pipettes, burettes) in a virtual environment, with the system providing real-time feedback on operational accuracy (e.g., “Titration speed too fast”). Simulation accuracy reaches 95%, closely mimicking real lab experiences.
Each strategy is dynamically adjusted by DeepSeek based on students’ pre-test performance and learning styles, ensuring targeted adaptability.
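The four strategies above amount to routing each student to a different support modality based on their dominant VARK style. The sketch below pictures this as a simple dispatch table; all function and key names are hypothetical illustrations, not DeepSeek's actual interface — only the modality parameters (30 fps animation, 120 wpm speech, 10 quiz items, real-time feedback) come from the text.

```python
# Hypothetical sketch of VARK-based content routing; names are invented.
SUPPORT_BY_STYLE = {
    "V": {"modality": "3d_animation", "frame_rate_fps": 30},
    "A": {"modality": "voice_guide", "speech_rate_wpm": 120},
    "R": {"modality": "structured_text", "quiz_items": 10},
    "K": {"modality": "virtual_lab", "realtime_feedback": True},
}

def select_support(vark_style: str) -> dict:
    """Return the support configuration for a student's dominant VARK style."""
    if vark_style not in SUPPORT_BY_STYLE:
        raise ValueError(f"Unknown VARK style: {vark_style!r}")
    return SUPPORT_BY_STYLE[vark_style]

config = select_support("K")  # a Kinesthetic learner gets the virtual lab
```

In a real system the returned configuration would further be tuned per student (e.g., by pre-test performance, as the study describes), but the style-to-modality mapping is the core of the adaptation design.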
3.4 Data Collection
This study collects four types of data to comprehensively evaluate DeepSeek’s adaptability effects:
– Learning Performance: Assessed by comparing pre- and post-test scores (total: 100 points) from the experimental tests, evaluating improvements in knowledge and skills. Tests were independently scored by two chemistry teachers, achieving high inter-rater consistency (Cohen’s κ = 0.87).
– Operational Accuracy: Recorded students’ operational steps in the post-test (e.g., measuring reagents, recording data), quantified using a standardized scoring rubric (10 items, 10 points each), with error rates expressed as percentages. The process was video-monitored to ensure data objectivity.
– Behavioral Data: Automatically logged by the DeepSeek system, including dwell time on 3D animations (minutes), number of interactions in virtual labs, and total learning path duration (minutes). Data was sampled every 5 seconds and stored in a cloud database.
– Subjective Adaptability: Measured via a self-designed questionnaire (15 items, 5-point Likert scale) assessing learning experience, cognitive load, and satisfaction. The questionnaire, adapted from Hawk and Shah (2007), was pre-tested for reliability (Cronbach α = 0.89) and validity (factor analysis KMO = 0.82).
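The reliability figures cited above (Cohen’s κ = 0.87 for the two raters, Cronbach’s α = 0.89 for the questionnaire) follow standard formulas. A minimal pure-Python sketch of both, run on invented toy data rather than the study’s records:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters' labels."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[lab] * c2[lab] for lab in set(c1) | set(c2)) / n ** 2  # chance
    return (po - pe) / (1 - pe)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists
    (one inner list per questionnaire item, same respondents in each)."""
    k = len(items)
    def var(xs):                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col) for col in zip(*items)]      # per-respondent total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy demos (invented data): two raters over six scripts; three items, four students
kappa = cohens_kappa(["pass", "pass", "fail", "pass", "fail", "pass"],
                     ["pass", "pass", "fail", "fail", "fail", "pass"])
alpha = cronbach_alpha([[4, 5, 3, 4], [5, 5, 3, 4], [4, 4, 2, 5]])
```

Values of κ above 0.8 are conventionally read as near-perfect inter-rater agreement, and α above 0.8 as good internal consistency, which is why the reported 0.87 and 0.89 support the credibility of the scoring and the questionnaire.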
3.5 Data Analysis
To ensure scientific rigor, the following statistical methods were employed:
– Paired t-tests: Compared pre- and post-test scores and operational accuracy within each group to examine the individual effects of DeepSeek’s intervention (significance level α = 0.05).
– One-Way Analysis of Variance (ANOVA): Analyzed inter-group differences in post-test scores and operational accuracy across the four groups to verify the influence of learning styles (post-hoc tests used Bonferroni correction).
– Correlation Analysis: Employed Pearson correlation coefficients to explore the relationship between subjective adaptability scores and learning performance improvements, quantifying the association of adaptability effects.
Data processing was conducted using SPSSAU, with outliers removed via the boxplot method to ensure result robustness.
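The study reports running these analyses in SPSSAU; for transparency, the core statistics — the paired t statistic, the Pearson correlation, and the boxplot (IQR) outlier rule — can be sketched in pure Python. The example data below are illustrative, not the study’s.

```python
import math

def paired_t(pre, post):
    """Paired t statistic for pre/post scores (mean of post-minus-pre differences)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def iqr_filter(xs):
    """Boxplot rule: drop values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    s = sorted(xs)
    def quantile(q):                       # linear interpolation between ranks
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [x for x in xs if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]

# Toy pre/post scores for one group (invented data)
pre = [65, 62, 70, 66, 68, 63]
post = [80, 78, 85, 82, 84, 79]
t = paired_t(pre, iqr_filter(post))
```

The t statistic would then be compared against the t distribution with n − 1 degrees of freedom at α = 0.05, matching the significance criterion stated above.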
4 Results and Analysis
4.1 Learning Effectiveness Analysis
4.1.1 Changes in Learning Performance
Table 1 presents the changes in learning performance for the four student groups before and after the DeepSeek intervention (total score: 100 points).
Table 1: Pre- and Post-Test Learning Performance Comparison
| Learning Style | Pre-Test Score (Mean ± SD) | Post-Test Score (Mean ± SD) | Improvement (%) | Paired t-Test (p-value) |
| --- | --- | --- | --- | --- |
| Visual (V) | 65.4 ± 6.8 | 82.1 ± 5.9 | +40.6% | p < 0.01 (significant) |
| Aural (A) | 63.8 ± 7.2 | 78.4 ± 6.3 | +29.3% | p = 0.041 (significant) |
| Read/Write (R) | 67.1 ± 6.5 | 79.2 ± 5.8 | +21.7% | p = 0.072 (not significant) |
| Kinesthetic (K) | 61.5 ± 7.4 | 85.3 ± 6.1 | +38.7% | p < 0.01 (significant) |
Paired t-test results indicate that the performance improvements for Visual (t = 5.82, p < 0.01) and Kinesthetic (t = 6.14, p < 0.01) students were highly significant, with gains of 40.6% and 38.7%, respectively. This suggests that DeepSeek’s 3D animation and virtual simulation strategies effectively enhanced knowledge mastery for these groups. Aural students showed a 29.3% improvement (t = 2.13, p = 0.041), which, while statistically significant, was less pronounced than that of Visual and Kinesthetic students, possibly due to the passive nature of voice-guided explanations limiting active learning. Read/Write students exhibited the smallest improvement (21.7%, t = 1.89, p = 0.072), failing to reach significance, reflecting a lower alignment with the predominantly visual and interactive adaptation strategies. ANOVA analysis (F = 5.62, p = 0.003) further confirmed significant differences in performance gains across the four groups, with Visual and Kinesthetic students outperforming the others.
4.1.2 Operational Accuracy
Table 2 presents the changes in students’ operational accuracy in experiments (full score: 100%).
Table 2: Pre- and Post-Test Operational Accuracy Comparison
| Learning Style | Pre-Test Accuracy (%) | Post-Test Accuracy (%) | Improvement (%) | Paired t-Test (p-value) |
| --- | --- | --- | --- | --- |
| Visual (V) | 72.3 ± 7.1 | 85.6 ± 6.2 | +18.5% | p = 0.015 (significant) |
| Aural (A) | 70.1 ± 6.9 | 81.2 ± 6.5 | +15.9% | p = 0.054 (not significant) |
| Read/Write (R) | 74.5 ± 7.3 | 83.0 ± 6.4 | +11.4% | p = 0.089 (not significant) |
| Kinesthetic (K) | 68.2 ± 6.7 | 90.1 ± 5.8 | +32.1% | p < 0.01 (significant) |
Kinesthetic students exhibited the greatest improvement in operational accuracy (32.1%, t = 5.97, p < 0.01), benefiting from the repeated practice opportunities provided by virtual simulation experiments. Visual students also showed significant improvement (18.5%, t = 2.58, p = 0.015), likely due to 3D animations aiding in a more precise understanding of operational steps. Improvements for Aural (15.9%, t = 2.01, p = 0.054) and Read/Write (11.4%, t = 1.76, p = 0.089) students did not reach statistical significance, suggesting that voice and text-based strategies have limited direct impact on skill training. ANOVA results (F = 4.97, p = 0.007) indicate significant inter-group differences, with Kinesthetic students standing out prominently.
4.2 Behavioral Data Analysis
Table 3 summarizes the behavioral data of students within the DeepSeek system (mean ± standard deviation).
Table 3: Learning Behavior Data
| Learning Style | 3D Animation Dwell Time (min) | Diagram Dwell Time (min) | Virtual Lab Interactions | Total Learning Path Duration (min) |
| --- | --- | --- | --- | --- |
| Visual (V) | 23.4 ± 4.8 | 18.7 ± 5.1 | 8.2 ± 3.5 | 62.3 ± 9.4 |
| Aural (A) | 12.5 ± 3.9 | 9.8 ± 4.2 | 5.4 ± 2.9 | 47.8 ± 8.1 |
| Read/Write (R) | 11.3 ± 3.5 | 10.1 ± 4.0 | 4.9 ± 2.5 | 45.6 ± 7.6 |
| Kinesthetic (K) | 9.2 ± 3.1 | 7.8 ± 3.7 | 21.5 ± 5.2 | 58.4 ± 8.7 |
Visual students spent the longest time on 3D animations (23.4 minutes) and diagrams (18.7 minutes), reflecting their reliance on visual content. Kinesthetic students recorded the highest number of interactions in virtual labs (21.5 times), indicating a preference for learning through operation. Behavioral data for Aural and Read/Write students were more dispersed, with lower dwell times and interaction counts, possibly because the existing strategies failed to fully engage them. ANOVA revealed significant inter-group differences in interaction frequency (F = 6.83, p < 0.001) and total duration (F = 4.12, p = 0.009), underscoring the influence of learning styles on behavioral patterns.
4.3 Subjective Adaptability Assessment
Table 4 presents students’ subjective ratings of DeepSeek (on a scale of 10).
Table 4: Subjective Adaptability Ratings
| Learning Style | Learning Experience (/10) | Cognitive Load Reduction (/10) | Learning Efficiency (/10) | Satisfaction (/10) |
| --- | --- | --- | --- | --- |
| Visual (V) | 8.7 ± 0.9 | 7.8 ± 1.2 | 8.5 ± 1.1 | 8.9 ± 0.8 |
| Aural (A) | 7.2 ± 1.1 | 6.5 ± 1.3 | 7.0 ± 1.2 | 7.4 ± 1.0 |
| Read/Write (R) | 6.8 ± 1.3 | 6.2 ± 1.4 | 6.9 ± 1.3 | 7.1 ± 1.1 |
| Kinesthetic (K) | 9.1 ± 0.8 | 8.5 ± 1.0 | 8.9 ± 0.9 | 9.3 ± 0.7 |
Kinesthetic students provided the highest subjective ratings (satisfaction: 9.3), reflecting the strong alignment of virtual experiments with their preferences. Visual students followed closely (satisfaction: 8.9), with 3D animations enhancing their learning experience. Aural (7.4) and Read/Write (7.1) students gave lower ratings, possibly due to insufficient interactivity in voice and text support. ANOVA results (F = 5.87, p = 0.004) indicate significant differences in satisfaction across groups. Correlation analysis revealed a positive relationship between satisfaction and performance improvement (r = 0.62, p < 0.01), confirming the impact of adaptability on learning outcomes.
5 Discussion
5.1 Effectiveness of AI Adaptation Strategies
The results of this study indicate that DeepSeek AI exhibits significant differences in adaptability effects across students with varying learning styles in chemistry laboratory learning. Visual and Kinesthetic students benefited the most, with learning performance improvements of 40.6% and 38.7%, respectively, and operational accuracy gains of 18.5% and 32.1%, both reaching statistical significance (p < 0.05). These findings align with Mayer’s (2009) multimedia learning theory, which posits that visualized content (e.g., 3D animations) reduces cognitive load through intuitive presentation, enabling Visual learners to more efficiently comprehend experimental principles. The standout performance of Kinesthetic students stems from the high interactivity of virtual simulation experiments, consistent with Barbe and Swassing’s (1979) assertion that kinesthetic learners significantly enhance memory and skill mastery through hands-on practice. The real-time feedback and repeated practice opportunities provided by virtual labs further amplified this effect.
In contrast, Aural and Read/Write students showed weaker improvements. Aural students achieved a 29.3% increase in learning performance (p = 0.041) and a 15.9% gain in operational accuracy (p = 0.054), with partial significance but limited overall impact. This may be attributed to the predominantly passive nature of voice-guided explanations, lacking the discussion-based interaction emphasized by Rogowsky et al. (2015), which restricted active exploration. Read/Write students recorded the smallest gains, with a 21.7% improvement in performance (p = 0.072) and an 11.4% increase in accuracy (p = 0.089), neither reaching significance. Pashler et al. (2008) note that Read/Write learners prefer deep reading and self-directed reasoning, yet DeepSeek’s text support, primarily structured explanations, failed to fully meet their need for autonomous processing. Additionally, the current strategies’ emphasis on visual and interactive elements may have diminished adaptability for Aural and Read/Write learners.
5.2 Significance of the Adaptability Evaluation Framework
The “AI-Learning Style” adaptability evaluation framework proposed in this study systematically quantifies the alignment between DeepSeek and VARK learning styles through multidimensional indicators, including learning performance, operational accuracy, behavioral data, and subjective ratings. Its theoretical significance lies in addressing a gap in AI-driven personalized education research. Compared to traditional single-metric evaluations based solely on performance (e.g., Kulik & Fletcher, 2016), this framework integrates behavioral analysis (e.g., interaction frequency) and subjective experiences (e.g., satisfaction), offering a more comprehensive reflection of the learning process’s complexity. Correlation analysis (r = 0.62, p < 0.01) further validates the positive relationship between adaptability and learning outcomes, providing a replicable quantitative framework for future studies.
In practical terms, this framework offers optimization directions for educators and technology developers. For instance, the high adaptability for Visual and Kinesthetic learners highlights the potential of multimodal presentations (e.g., animations, simulations), while the lower ratings for Aural and Read/Write learners suggest a need for enhanced interactivity and content depth. This framework can guide smart laboratories in selecting adaptive tools and provide data-driven support for iterating DeepSeek’s functionalities, advancing the precise implementation of personalized instruction.
5.3 Limitations of the Study
Despite its significant findings, this study has the following limitations:
– Sample Limitation: The research participants were limited to 120 second-year high school students from two urban high schools, restricting the geographic and age representativeness of the sample. The generalizability of the conclusions requires further validation.
– Intervention Duration: The 6-week intervention period may be insufficient to fully capture DeepSeek’s long-term adaptability effects across learning styles, particularly for Read/Write learners, whose habit changes might demand more time.
– Strategy Singularity: Current adaptation strategies rely on single modalities (e.g., animations for Visual learners), without fully exploring the synergistic effects of multimodal combinations (e.g., animation + voice), potentially limiting support for Aural and Read/Write learners.
– External Variables: Although teacher guidance and textbook content were controlled, students’ prior chemistry knowledge and learning motivation may still have exerted unaccounted influences on the results.
These limitations suggest that future research should expand the sample scope, extend the intervention duration, and refine adaptation strategies to enhance comprehensiveness.
6 Conclusion and Future Directions
6.1 Research Conclusion
This study, through a pre-test-intervention-post-test experimental design, validated the adaptability of DeepSeek AI in chemistry laboratory learning and its effects on students with VARK learning styles. The results demonstrate that DeepSeek exhibited significant adaptability for Visual and Kinesthetic learners, with learning performance improvements of 40.6% and 38.7%, operational accuracy gains of 18.5% and 32.1% (p < 0.05), and subjective satisfaction ratings reaching 8.9 and 9.3 (out of 10), respectively. These outcomes are attributed to the intuitive presentation of 3D animations and the high interactivity of virtual simulation experiments, which precisely aligned with the learning preferences of these two groups. However, for Aural and Read/Write learners, DeepSeek’s effects were weaker, with performance improvements of 29.3% and 21.7%, accuracy gains of 15.9% and 11.4%—some of which did not reach statistical significance (p > 0.05)—and satisfaction scores of only 7.4 and 7.1. This reflects deficiencies in the interactivity and depth of voice explanations and text support, necessitating further optimization.
Theoretically, the “AI-Learning Style” adaptability evaluation framework developed in this study quantifies the alignment between AI and learning styles through multidimensional indicators (performance, skills, behavior, subjective experience), filling a gap in personalized education assessment and providing a systematic framework for intelligent instruction research. Practically, the proposed “Three-Dimensional Adaptability Model” (cognitive, affective, behavioral) offers a scientific basis for selecting and refining intelligent tools in smart laboratories, advancing chemistry lab instruction toward personalization and intelligence.
6.2 Future Directions
Based on the findings and limitations, future research can advance in the following directions:
– Optimizing Adaptation Strategies: To address the shortcomings for Aural and Read/Write learners, integrating multimodal fusion technologies is recommended. For instance, developing interactive voice Q&A modules for Aural learners could enhance active engagement, while providing editable lab report templates for Read/Write learners could support deeper reasoning. Additionally, combining augmented reality (AR) and virtual reality (VR) technologies to create cross-modal learning environments (e.g., animation + voice + operation) could improve adaptability across all learning styles.
– Expanding Research Scope: Limited to high school students, this study could extend to junior high, university, or vocational education populations to verify DeepSeek’s applicability across age groups and disciplines (e.g., physics, biology). Increasing sample size and geographic diversity (e.g., rural vs. urban schools) would enhance the generalizability of the results.
– Practical Application Guidance: Translating findings into practice, teachers are advised to flexibly combine DeepSeek modules with traditional methods based on students’ VARK profiles—e.g., supplementing paper-based materials for Read/Write learners or adding real lab opportunities for Kinesthetic learners. Furthermore, developing teacher training programs to guide effective integration of DeepSeek in smart laboratories could boost instructional efficiency.
Through these enhancements, DeepSeek could achieve more comprehensive personalized support, driving chemistry lab education toward greater precision and intelligence, and contributing further theoretical and practical value to modern educational technology development.
References
[1] Barbe, W. B., & Swassing, R. H. (1979). Teaching through modality strengths: Concepts and practices. Columbus, OH: Zaner-Bloser.
[2] Chen, X., Zou, D., & Xie, H. (2022). Fifty years of virtual reality in education: A meta-analysis of the impact on student learning outcomes. Educational Research Review, 35, 100432.
[3] Fleming, N. D. (2001). Teaching and learning styles: VARK strategies. Christchurch, New Zealand: N.D. Fleming.
[4] Fleming, N. D., & Mills, C. (1992). Not another inventory, rather a catalyst for reflection. To Improve the Academy, 11(1), 137-155.
[5] Hawk, T. F., & Shah, A. J. (2007). Using learning style instruments to enhance student learning. Decision Sciences Journal of Innovative Education, 5(1), 1-19.
[6] Huang, T., Zhang, J., & Liu, Y. (2021). Personalized learning recommendation system based on deep learning for chemistry education. Journal of Educational Technology Development and Exchange, 14(2), 45-60.
[7] Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42-78.
[8] Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2021). Intelligence unleashed: An argument for AI in education. London: Pearson Education.
[9] Mayer, R. E. (2009). Multimedia learning (2nd ed.). Cambridge, UK: Cambridge University Press.
[10] Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. J. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70, 29-40.
[11] Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.
[12] Reid, J. M. (2005). Learning styles in the ESL/EFL classroom. TESOL Quarterly, 39(2), 345-348.
[13] Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2015). Matching learning style to instructional method: Effects on comprehension. Journal of Educational Psychology, 107(1), 64-78.
[14] Zhai, X., & Feng, L. (2023). The role of large language models in personalized education: Opportunities and challenges. Educational Technology & Society, 26(1), 89-102.
[15] Zhang, L., & Liu, Q. (2023). Behavioral data analytics in adaptive learning systems: A review of current trends and future directions. Journal of Computer Assisted Learning, 39(4), 1123-1138.