
Project introduction and background information

In its Strategy 2030 document (Executive Board TU/e, 2018: 31), TU/e stresses the importance of digitization to allow learning at any place and at any time, and to support adaptive and personalized instruction and feedback. Intelligent systems could be used to fulfil this need by providing personalized, automated, and timely feedback.

The power, sophistication, and societal prominence of generative AI systems such as ChatGPT have only grown. As evidenced by discussions in academic, professional, and popular media, these systems pose an unprecedented challenge to established structures and institutions in many different domains, including higher education. The particular problem facing higher education is that generative AI undermines the principle of constructive alignment between learning objectives and assessment methods, affecting students and teachers alike.

For students, the advent of generative AI is poised to change the skills they need to succeed after leaving university and entering the workforce. Given their general-purpose nature, systems such as ChatGPT are likely to be used in many different domains, from software engineering to journalism and marketing. Graduating students entering these domains must possess relevant skills, which are likely to differ from the ones higher education is traditionally designed to promote. In particular, when performing writing tasks such as conducting literature reviews or compiling reports, students will no longer need to work on their own, but will instead be expected to collaborate with AI technology to enhance the speed and quality of their writing. Among other things, this will require mastery of skills such as prompt engineering and machine summarization, as well as critical engagement with AI-generated content. Higher education generally, and TU/e specifically, should equip students with these skills, and will therefore need to identify and articulate “future-oriented” learning objectives to be achieved in writing-based university courses.

For teachers, the increased power and availability of generative AI challenge their ability to assess student learning. Because tools such as ChatGPT can be used to produce deliverables such as essays, reports, diagrams, and code, the provenance of these deliverables can no longer be traced to individual students rather than sophisticated machines. As a consequence, it is unclear whether students are actually satisfying the stated learning objectives, and teachers will require “AI-proof” assessment methods that allow them to measure the extent to which students have mastered the ability to write clearly and effectively, either in collaboration with relevant AI systems or on their own.

If TU/e is to stay at the forefront of societally relevant engineering education, it will have to support the implementation of new educational methods that promote effective writing. Crucially, this action will have to occur sooner rather than later: as generative AI systems become even more powerful, available, and easy to use, it is in the university’s interest to stay ahead of developments and be proactive rather than reactive.

Objective and expected outcomes

The main objectives and expected outcomes of the project are organized into the following work packages:

WP1: Mapping the literature on future-oriented and GenAI compatible higher education

This work package will study the impact of Generative AI (GenAI) on aligning learning objectives, activities, and assessments in higher education. The main focus will be on identifying future-oriented learning goals and GenAI-compatible pedagogical and assessment methods based on literature searches. 

Expected outcomes:

  • Report 1a: Report summarizing key insights about future-oriented learning objectives, together with lists of learning activities and assessment methods useful for accommodating GenAI in education (Dec 2024). Research insights from this WP will also be presented at the 4TUCEE End of Year event (November 2024).
  • Report 1b: An updated version of Report 1a incorporating any relevant updates in light of AI technology advancements (June 2025).


WP2: Designing a framework for learning assessment through AI interaction analysis

This work package will study interactions between students and GenAI chatbots like ChatGPT, and their relation to learning outcomes. Collaborating with course teachers, education experts, and learning analytics specialists, we will develop a framework to analyze these interactions (e.g., logs) for multiple courses at TU/e that involve writing assignments. The focus will be on identifying patterns that indicate learning progress, GenAI literacy, critical thinking, and knowledge construction.

This analysis framework includes:

  • Collecting student-GenAI interaction logs (user prompts and chatbot responses).
  • Applying qualitative coding schemes for dialogue content to assess patterns of interaction, including critical thinking evidence, question types, and iteration patterns.
  • Developing quantitative metrics for interaction analysis (e.g., rubric scores, input frequency).
  • Conducting rubric-based assessments comparing traditional evaluations with GenAI-interaction assessments.
  • Automating chat log analysis using natural language processing to identify learning patterns (a minimal coding sketch follows this list).
  • Surveying student characteristics and perceptions through open-ended questions about their use of GenAI tools in learning.
  • Collaborating with students to critically assess and optimize evaluation criteria for student-AI interactions (in alignment with TU/e BOOST objectives).
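
To make the coding and metric steps above concrete, the following minimal Python sketch illustrates how a chat log might be coded and summarized. The categories, keyword lists, and log format are hypothetical placeholders for illustration, not the project’s actual coding scheme, which would be developed and validated by human coders.

```python
from collections import Counter

# Hypothetical coding scheme mapping dialogue categories to indicative keywords.
CODING_SCHEME = {
    "critical_thinking": ["counterargument", "evidence", "assumption", "evaluate"],
    "information_retrieval": ["what is", "define", "explain"],
    "revision": ["rewrite", "improve", "rephrase", "feedback"],
}

def code_prompt(prompt: str) -> str:
    """Assign a student prompt to the first matching category, else 'other'."""
    text = prompt.lower()
    for category, keywords in CODING_SCHEME.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

def summarize_log(log: list[dict]) -> dict:
    """Compute simple quantitative metrics over one student's chat log."""
    prompts = [turn["content"] for turn in log if turn["role"] == "user"]
    categories = Counter(code_prompt(p) for p in prompts)
    n_questions = sum(1 for p in prompts if "?" in p)
    return {
        "n_prompts": len(prompts),
        "n_questions": n_questions,
        "category_counts": dict(categories),
    }

# Toy log in the common {role, content} format used by chatbot APIs.
log = [
    {"role": "user", "content": "What is constructive alignment?"},
    {"role": "assistant", "content": "..."},
    {"role": "user", "content": "Give a counterargument to my thesis and evaluate my evidence."},
]
print(summarize_log(log))
```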

This work package will take place in AY 2024–2025 across various Bachelor’s and Master’s courses from different departments.

Expected outcomes:

  • Report 2a: A description of the methodology and framework for assessing learning outcomes through AI interaction log analysis, as well as the outcomes from a set of pilot courses (April 2025). Research insights from this WP will also be presented at the 4TUCEE End of Year event (November 2024).
  • Report 2b: Tutorial for teachers on practical implementations of the assessment framework (April 2025).

 

WP3: Developing pedagogical activities powered by Generative AI

Based on the insights obtained from the literature review in WP1, as well as the insights gained from the development of the AI-compatible assessment method in WP2, we aim to develop pedagogical activities that improve constructive alignment by teaching students how to use AI effectively to improve their performance on student-GenAI interaction assessments. The design of these GenAI tutors will be informed by insights gained in the previous work packages, and tailored to the learning objectives of the course and the specificities of the course assignments. These tools will be tested in pilot studies, in at least two courses with different types of assignments (e.g., argumentative essay vs. statistical programming and data analysis). For example:

  • In a TU/e statistics course like “Advanced research methods and research ethics”, where the objective is for students to learn how to think about their analytical approach to a given problem, the GenAI tutor can be designed to steer the student towards the assignment solution and desirable outcomes through questions posed to the student (scaffolding), as opposed to directly providing the solution. Simultaneously, a second GenAI tutor available to the same student could assist with another task required by the assignment, such as statistical programming, by providing explanations of the code or helping with code debugging. The behavior of the AI chatbot can be configured through prompt engineering techniques such as “Chain-of-Thought” combined with “Flipped interaction” (White et al., 2023).
  • For writing-oriented courses such as “Data Science Ethics”, a GenAI tutor tool can be designed to assist with academic writing through scaffolding, such as guiding essay outlines, prompting critical thinking in the student, and offering instant feedback on their writing in a way that is aligned with the course learning objectives (embedded in the tutor’s underlying model or knowledge context). Likewise, the behavior of the AI chatbot is configured through prompt engineering techniques such as “Chain-of-Thought” combined with “Flipped interaction” (White et al., 2023). A minimal configuration sketch follows this list.
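
As an illustration of how such tutor behavior could be configured, the sketch below sets up a “flipped interaction” scaffolding tutor via a system prompt, using the OpenAI Python client as an example backend. The model name, prompt wording, and course context are hypothetical, not the project’s actual tutor configuration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt combining scaffolding ("flipped interaction")
# with step-by-step ("chain-of-thought" style) guidance.
TUTOR_SYSTEM_PROMPT = """\
You are a statistics tutor. Never give the solution directly.
Instead, ask the student one guiding question at a time that leads them
to reason step by step toward their own analytical approach.
Only confirm or gently correct their reasoning after they answer.
"""

def tutor_reply(history: list[dict], student_message: str) -> str:
    """Send the conversation to the model and return the tutor's next question."""
    messages = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=messages,
    )
    return response.choices[0].message.content

# Example: the tutor answers a "give me the solution" request with a question.
print(tutor_reply([], "Which statistical test should I use for my data?"))
```

The same pattern would apply to the writing tutor: only the system prompt changes, embedding the course’s learning objectives in place of the statistics scaffolding.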

By designing these GenAI tutor prototypes and incorporating them as tools in a course, we will investigate:

  • Whether using GenAI tutor tools leads to higher grades in student-GenAI interaction assessments, thereby providing insight into the quality of constructive alignment in the course
  • The impact of using these tools on self-perceptions of skill mastery (e.g., self-efficacy), thereby providing insight into alignment with students’ psychological needs
  • Students’ experiences using these tools (using open-ended questions in a post-assignment survey)

Expected outcomes:

  • Report 3a: Tutorial on how to design a GenAI tutor and tailor it to a specific use case (March 2025)
  • Report 3b: Report describing the results of the GenAI tutor studies (Nov 2025).


WP4: Workshops on GenAI tools for educational activities

AI literacy is a crucial step towards the responsible use of GenAI technology in educational practices (Kasneci et al., 2023; Redecker, 2017). In this work package, we will develop workshops aimed at students (from Bachelor to PhD candidates), staff, or both, teaching how to properly design and implement GenAI-based assistants using the most relevant and/or accessible AI chatbot tool(s) available at the time the workshops take place. The first type of workshop will focus on the essential steps to configure the behavior of AI chatbot assistants through existing techniques (e.g., prompt engineering, fine-tuning options), and provide examples of how to tailor them to more specific use cases (e.g., drafting teaching materials, academic writing assistance, literature summarization, data analysis assistance). A second type of workshop will place greater weight on practical tips for employing GenAI tools in academic writing activities. A third type of workshop, aimed at the ALT community (Academy for Learning and Teaching, TU/e), will focus on topics of AI literacy. The (tentative) content may cover evidence-based utility of GenAI tools, building GenAI chatbots to augment teaching practice, ethical use of GenAI, and practical recommendations derived from the pilot studies. Dates and times of these workshops will be arranged with project members and project coordinators from 4TUCEE (SEFI) and BOOST (ALT).

Expected outcomes:

  • Ongoing workshops at TU/e events focusing on GenAI chatbot design for educational activities and, where possible, a workshop at a scientific event (e.g., SEFI) (Dec 2024–Dec 2025).

References

  • Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
  • Redecker, C. (2017). European framework for the digital competence of educators: DigCompEdu. In Y. Punie (Ed.), Technical report. Joint Research Centre (Seville site). https://data.europa.eu/doi/10.2760/159770
  • White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (arXiv:2302.11382). arXiv. https://doi.org/10.48550/arXiv.2302.11382

Results and learnings

WP1: Mapping the literature on future-oriented and GenAI compatible higher education

Context

Generative AI (GenAI) technologies like ChatGPT and similar chatbots are transforming higher education. As these tools become more sophisticated, they pose significant challenges and opportunities for teaching, learning, and assessment. This summary provides an overview of the key insights from our more extensive report (available below), showing how our ongoing review of state-of-the-art knowledge on the challenges and opportunities of GenAI in higher education can be translated into actionable information for teachers and students, including strategic directions, future research, and practical recommendations. Given the urgency of responding to GenAI’s ongoing transformative impact, the emphasis of this summary is on practical recommendations.

Key challenges identified

  • Disruption of traditional learning and assessment: GenAI’s humanlike content generation challenges conventional teaching methods, assessment integrity, and the unique value of human instruction (Kolade et al., 2024; Rathi et al., 2024).
  • Need for skill reorientation: Both educators and students must shift focus toward skills that AI cannot easily replicate—critical thinking, creativity, ethical reasoning, adaptability, and AI literacy (Bower et al., 2024; Chauncey & McKenna, 2024; Kolade et al., 2024).
  • Assessment uncertainty: With AI’s capacity to generate content, verifying authorship and evaluating genuine understanding become more complex, necessitating a redesign of assessment methodologies (Fleckenstein et al., 2024; Jakesch et al., 2019).

Strategic responses and recommendations

Redefining learning objectives

  • Emphasize higher-order skills: critical thinking, problem-solving, creative ideation, ethical reasoning, and adaptability (Kasneci et al., 2023; Zhai, 2022).
  • Integrate AI literacy: Ensure students and teachers understand AI’s capabilities, limitations, biases, and ethical considerations (Bower et al., 2024; Chiu, 2024; Kolade et al., 2024).

Transforming assessment practices

  • Move from product-focused to process-oriented assessment: Evaluate reasoning processes, metacognitive skills, and real-world application (Cheng et al., 2024; Kolade et al., 2024).
  • Adopt diverse, authentic evaluation methods: Use live presentations, peer assessments, project-based assignments, and frequent low-stakes assessments to mitigate AI’s advantages in generating generic responses (Kolade et al., 2024; Xia et al., 2024).
  • Leverage prompt analytics: Analyze student interactions with AI to gain insights into learning processes and provide personalized feedback (Cheng et al., 2024; Kim et al., 2024).

Faculty development and policy updates

  • Invest in professional development to equip educators with skills for integrating AI responsibly into pedagogy and assessment (Chan & Tsi, 2024; Lim et al., 2023).
  • Update assessment policies to establish clear guidelines on AI use, maintaining academic integrity while embracing AI-enabled learning enhancements (Mollick & Mollick, 2022; Xia et al., 2024).

Ongoing research and pilot studies

  • Supporting and monitoring pilot projects (e.g., at TU/e) that assess the impact of GenAI on learning outcomes, teacher effectiveness, and student motivation.
  • Continuous multidisciplinary research to refine teaching and assessment strategies, striving for alignment with evolving (Gen)AI capabilities and future-oriented educational goals (e.g., Deng & Joshi, 2024; Mollick & Mollick, 2024; Rowland, 2023).

Value for teachers and students

For teachers:

  • Empowerment through professional development and clear guidelines on AI integration, enabling them to design engaging, authentic assessments that emphasize unique human skills.
  • Improved assessment tools and strategies that provide more accurate measures of student understanding and skill acquisition.

For students:

  • Development of relevant, future-oriented skills that enhance employability and adaptability in an AI-driven landscape (Chiu, 2024; Zhai, 2022).
  • Learning experiences that promote creativity, critical thinking, and ethical reasoning—areas where human judgment remains indispensable (Bower et al., 2024).

Conclusion and next steps

The integration of GenAI in higher education calls for a strategic, research-backed approach to curriculum design, assessment methods, and faculty development. By focusing on uniquely human skills and transforming assessment practices, institutions can benefit from AI’s potential while preserving academic integrity and enhancing learning outcomes. Stakeholders are encouraged to support ongoing research, pilot projects, and policy updates that inform best practices. This proactive approach prepares both teachers and students for an increasingly AI-integrated educational environment.

References

  • Bower, M., Torrington, J., Lai, J. W. M., Petocz, P., & Alfano, M. (2024). How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence? Outcomes of the ChatGPT teacher survey. Education and Information Technologies, 29(12), 15403–15439. https://doi.org/10.1007/s10639-023-12405-0
  • Chan, C. K. Y., & Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Studies in Educational Evaluation, 83, 101395. https://doi.org/10.1016/j.stueduc.2024.101395
  • Chauncey, S. A., & McKenna, H. P. (2024). Exploring the Potential of Cognitive Flexibility and Elaboration in Support of Curiosity, Interest, and Engagement in Designing AI-Rich Learning Spaces, Extensible to Urban Environments. In N. A. Streitz & S. Konomi (Eds.), Distributed, Ambient and Pervasive Interactions (pp. 209–230). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-60012-8_13
  • Cheng, Y., Lyons, K., Chen, G., Gašević, D., & Swiecki, Z. (2024). Evidence-centered assessment for writing with generative AI. Proceedings of the 14th Learning Analytics and Knowledge Conference, 178–188. https://doi.org/10.1145/3636555.3636866
  • Chiu, T. K. F. (2024). Future research recommendations for transforming higher education with generative AI. Computers and Education: Artificial Intelligence, 6, 100197. https://doi.org/10.1016/j.caeai.2023.100197
  • Deng, X., & Joshi, K. D. (2024). Promoting ethical use of generative AI in education. SIGMIS Database, 55(3), 6–11. https://doi.org/10.1145/3685235.3685237
  • Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209. https://doi.org/10.1016/j.caeai.2024.100209
  • Jakesch, M., French, M., Ma, X., Hancock, J. T., & Naaman, M. (2019). AI-Mediated Communication: How the Perception that Profile Text was Written by AI Affects Trustworthiness. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3290605.3300469
  • Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
  • Kim, M., Kim, S., Lee, S., Yoon, Y., Myung, J., Yoo, H., Lim, H., Han, J., Kim, Y., Ahn, S.-Y., Kim, J., Oh, A., Hong, H., & Lee, T. Y. (2024). Designing Prompt Analytics Dashboards to Analyze Student-ChatGPT Interactions in EFL Writing (arXiv:2405.19691). arXiv. https://doi.org/10.48550/arXiv.2405.19691
  • Kolade, O., Owoseni, A., & Egbetokun, A. (2024). Is AI changing learning and assessment as we know it? Evidence from a ChatGPT experiment and a conceptual framework. Heliyon, 10(4), e25953. https://doi.org/10.1016/j.heliyon.2024.e25953
  • Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790
  • Mollick, E. R., & Mollick, L. (2022). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments (SSRN Scholarly Paper 4300783). https://doi.org/10.2139/ssrn.4300783
  • Mollick, E. R., & Mollick, L. (2024). Instructors as innovators: A future-focused approach to new AI learning opportunities, with prompts (SSRN Scholarly Paper 4802463). https://doi.org/10.2139/ssrn.4802463
  • Rathi, I., Taylor, S., Bergen, B. K., & Jones, C. R. (2024). GPT-4 is judged more human than humans in displaced and inverted Turing tests (arXiv:2407.08853). arXiv. https://doi.org/10.48550/arXiv.2407.08853
  • Rowland, D. R. (2023). Two frameworks to guide discussions around levels of acceptable use of generative AI in student academic research and writing. Journal of Academic Language and Learning, 17(1), Article 1.
  • Xia, Q., Weng, X., Ouyang, F., Lin, T. J., & Chiu, T. K. F. (2024). A scoping review on how generative artificial intelligence transforms assessment in higher education. International Journal of Educational Technology in Higher Education, 21(1), 40. https://doi.org/10.1186/s41239-024-00468-z
  • Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4312418

WP2: Designing a framework for learning assessment through AI interaction analysis

In WP2, we developed a new taxonomy to assess evidence of learning through the analysis of how students interact with Generative AI (GenAI) tools in the context of writing argumentative philosophical essays. This approach shifts the focus away from merely grading the product (the essay) and towards examining how students build their essay with AI, looking inside the learning process rather than just the end product. The framework’s applicability was investigated in a pilot study involving three writing-intensive courses at TU/e, where students were permitted to use GenAI on the condition that their interaction data were recorded and submitted.

Analysis utilizing this taxonomy revealed distinct patterns in student engagement with GenAI and their correlation with academic performance. Specifically, students who employed GenAI for higher-order cognitive tasks, such as conceptual generation, argumentation refinement, or critical reflection, demonstrated superior academic outcomes, as measured by both interaction log evaluations and traditional essay scores.

On the other hand, students who predominantly engaged with GenAI for lower-order tasks, such as information retrieval or surface-level textual correction, were associated with comparatively lower academic performance. These results indicate that the nature of student interaction with GenAI significantly influences learning processes and resultant outcomes. Consequently, this taxonomy serves as a valuable resource for educators to characterize diverse interaction strategies, facilitate assessment of learning processes extending beyond final outputs, and inform effective pedagogical strategies for integrating GenAI.
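
To illustrate how such an association between interaction patterns and performance could be quantified, the sketch below computes a rank correlation between the share of higher-order prompts in a student’s log and the essay grade. The numbers are toy values invented purely for illustration; real values would come from the coded interaction logs and course grades.

```python
import pandas as pd
from scipy.stats import spearmanr

# Toy, purely illustrative data: per student, the share of prompts coded as
# higher-order (e.g., argumentation refinement, critical reflection) and the
# final essay grade.
df = pd.DataFrame({
    "higher_order_share": [0.10, 0.25, 0.40, 0.55, 0.70],
    "essay_grade":        [5.5, 6.0, 7.0, 7.5, 8.5],
})

# Spearman's rho is robust to non-linear but monotone relationships.
rho, p = spearmanr(df["higher_order_share"], df["essay_grade"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```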

Recommendations

WP1

Recommendations based on the literature review from WP1

Higher education institutions should focus on redesigning their curricula and assessment methods to emphasize skills that AI cannot easily replicate, and possibly design novel courses that address the increasing need for AI literacy, critical thinking, and human-technology interaction ethics. This implies shifting from content-focused instruction to developing higher-order thinking skills through activities that are less easily offloaded to AI systems. It should be noted, however, that what constitutes desirable or undesirable use of AI ultimately depends on the intended learning objectives (ILOs) of a course. Investing in teachers’ AI literacy should facilitate the design of courses that coexist more harmoniously with the technology; in practice, this means better alignment between ILOs, pedagogical activities, and assessment approaches. One example could be a course in which some activities involve learning how to responsibly co-write essays with generative AI, followed by assessment of the interaction between the student and the AI throughout the writing process (i.e., prompt analytics). If the ILOs emphasize core competencies that AI systems can already execute well, but that students still need in order to scrutinize and assess AI-generated outputs effectively, teachers should consider teaching and evaluating such skills in an AI-free pedagogical environment.

The majority of assessed student output in higher education is in verbal format, such as essays, reports, and presentations. This type of output is directly threatened by the capability of large language models (LLMs) and other generative AI tools to easily produce and manipulate verbal content, which requires a rethinking of how teachers assess learning. With AI increasingly able to take over otherwise hard-earned thinking and writing skills, assessment strategies should move away from traditional essays and exams toward performance-based evaluation methods that demonstrate authentic learning and application of knowledge. This includes implementing more live assessments, such as presentations, group projects, and case studies, that require students to demonstrate critical thinking, problem-solving, and creativity in real time while applying their knowledge in a given context.


WP2. How to assess learning through the analysis of student-GenAI interactions

To leverage this interaction taxonomy for assessing learning in contexts where GenAI is permitted, instructors must begin by explicitly allowing its use and establishing clear parameters. Crucially, this approach necessitates requiring students to submit their complete interaction logs alongside their final assignments. Evaluating these logs effectively requires a dedicated rubric, which should focus on aspects such as the strategic nature of student AI engagement, the criticality applied to AI outputs, the depth of inquiry demonstrated, and how interactions align with specific learning objectives. This process shifts assessment focus partly from the final product to the learning process itself.
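
As an illustration of what such a rubric could look like in practice, the sketch below encodes the criteria named above with placeholder weights and a simple scoring function. The weights and scale are hypothetical assumptions for illustration, not the project’s validated rubric.

```python
# Hypothetical rubric for evaluating student-GenAI interaction logs.
# Each criterion is scored 0-4 by a human assessor; weights are placeholders.
RUBRIC = {
    "strategic_use":       0.30,  # purposeful, goal-directed AI engagement
    "criticality":         0.30,  # scrutiny applied to AI outputs
    "depth_of_inquiry":    0.20,  # follow-up questions, probing, iteration
    "alignment_with_ilos": 0.20,  # interactions serve the learning objectives
}

def rubric_score(scores: dict[str, int]) -> float:
    """Weighted average of criterion scores, scaled to a 0-10 grade."""
    total = sum(RUBRIC[criterion] * score for criterion, score in scores.items())
    return round(total / 4 * 10, 1)  # max criterion score is 4

# Example: an assessor's scores for one student's interaction log.
print(rubric_score({
    "strategic_use": 3,
    "criticality": 4,
    "depth_of_inquiry": 2,
    "alignment_with_ilos": 3,
}))  # -> 7.8
```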

Analysis utilizing the taxonomy has shown that the way students interact with GenAI is strongly linked to their academic outcomes. Interactions indicative of a collaborative intellectual partnership, such as posing original ideas for feedback, soliciting counterarguments, or engaging in substantive draft refinement, were associated with higher performance and better essay grades. In contrast, more instrumental uses, like basic information retrieval or simple structural corrections, correlated with lower outcomes. Understanding these patterns is vital for guiding students toward more effective AI use. Based on these findings, we offer the following recommendations for practice:

  • Explicitly integrate and define GenAI's role: Clearly articulate how GenAI can and should be used in your course to align with learning goals.
  • Require and evaluate interaction logs: Implement the submission of logs as a mandatory component of assessment to gain visibility into the learning process and inform feedback.
  • Design tasks promoting higher-order interaction: Develop assignments that encourage students to use GenAI for complex tasks like brainstorming, exploring arguments, and critical revision, rather than just basic editing or information gathering.
  • Consider process-based assessment: Supplement traditional essay grading with criteria in your rubric that reward strategic and critical engagement within the interaction logs.
  • Provide explicit guidance on effective AI use: Teach students how to interact with GenAI in ways that support deeper learning and academic development.

WORK IN PROGRESS...

  • WP3: Pilot studies using GenAI-based tutoring applications in the classroom
  • WP4: Workshops on GenAI tutor building, AI literacy, and co-working with GenAI

Practical outcomes

WP1

The insights gained from the systematic scoping review of the literature on state-of-the-art perspectives and interventions regarding future-oriented learning objectives and AI-compatible assessment methods informed the design of pilot studies at TU/e, which assess the impact of using Generative AI chatbots on outcomes ranging from student learning to teaching activities.


WP2. Assessing student interactions with GenAI in the context of academic writing 

A key finding informing practice is that certain types of student interactions, such as providing original content ideas, soliciting counterarguments from the AI, or requesting substantive draft improvements, are strongly associated with higher academic performance. By analyzing interaction logs through the lens of this taxonomy, teachers can gain valuable insights into student strategies, which allows them to provide targeted feedback that guides students toward more effective, higher-order engagement with GenAI, enhancing their learning and the development of their writing skills.

WORK IN PROGRESS...

  • WP3: Pilot studies using GenAI-based tutoring applications in the classroom
  • WP4: Workshops on GenAI tutor building, AI literacy, and co-working with GenAI