AI in Teaching and Learning

Recommendations for the Use of AI-based Applications

Unless otherwise stated, this webpage has been translated using DeepL. The machine-translated version of the template for Declarations of Authenticity has been revised by the university's translation service.

Background

Tools have always been used to support processes of academic work. Whether applications based on generative artificial intelligence should be counted among those tools and to what extent their use may be permitted in which areas of teaching, learning and examination are frequently debated topics at present.

Focus of the guidelines

These recommendations provide orientation regarding initial legal assessments of dealing with generative AI-based applications in the context of teaching, learning and examination, as well as regarding general requirements for autonomy, diligence and transparency. Among other things, the recommendations contain templates for Declarations of Authenticity and course agreements, which can be used optionally.

The recommendations particularly focus on the composition of written work in the context of teaching, learning and examination and provide information for teaching staff as well as students. Teachers will find, for example, tips on redesigning the framework conditions for their classes. Didactic aspects of the use of AI-based applications and practical suggestions are addressed, but dealt with in more detail elsewhere. Links to relevant sources of information are provided in the corresponding passages.

The technical development of generative AI tools is progressing continuously and entails, among other things, legal changes. The content of these recommendations is therefore subject to change and will be regularly reviewed and updated.

The university's position

It is certain that AI-based applications are becoming an integral part of everyday life, education and professional life. What is less certain is the best possible way to integrate these developments into the fields of studying, teaching and examination. On the one hand, the use of generative AI is said to have a disruptive potential, while on the other hand it offers a wide range of opportunities.

As with all tools used in an academic context, AI-based applications must be used conscientiously, responsibly and transparently, i.e. in line with good academic practice. Building on this, Osnabrück University is committed to a critically reflective and open attitude towards the innovation potential of generative AI and its use in an academic context. These recommendations provide the necessary points of reference for this in practice.

Basics in dealing with generative AI

Like many terms used across academic disciplines, "artificial intelligence" (AI) cannot be defined clearly and sharply. Rather, AI must be identified with the entirety of methods and applications accepted by the specialist community.

According to the European Parliament, AI is "the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity". A key feature of AI-based tools is their ability to perform specific tasks more efficiently than natural intelligence does. Under certain conditions, AI-based tools can assist teachers and students in their everyday work by performing tasks more efficiently while delivering results of at least the same quality. Before delegating a task in this way, the specific goal, the task itself and the required quality of the result should always be defined.

Definition of Terms

AI-based applications, also known as AI tools, are software applications which can analyze large amounts of data very quickly. Answers to queries are generated based on the evaluation of the data available. The evaluation takes place in processes that have been "trained" beforehand. Various methods and different types of data sets may have been used for this. It is also possible for AI-based applications to improve the quality of their output by using the data entered after their release in order to go on "learning". However, AI-based applications are not able to understand the content of the processed data, as they only work with hit probabilities in order to predict the best possible results based on the data available.

Generative AI describes systems that use machine learning methods and large amounts of data to react to input and generate complex content. AI language models, also known as Large Language Models (LLMs), are currently the best known examples. These models can recognize, process and generate human language. In addition to text-generating AI systems, generative AI-based applications also include image-generating AI systems or AI systems that generate videos, programming code, etc. This is why the term "Foundation Model" is now more regularly used instead of "Large Language Model".

More information on the topic What is AI? is provided in German via the Digitale Lehre Portal.

Responsibility in dealing with AI-based applications

Osnabrück University encourages teaching staff to have an open mind about generative AI tools and to actively integrate them into their teaching practice if the conditions allow it (see Legal assessments). This may be done in different ways depending on the subject and course. AI-based applications can e.g. be addressed as learning objects, reflected upon against the background of the subject or used to perform tasks. The Digitale Lehre Portal offers suggestions for specific questions, application scenarios etc. in German.

For reasons of data protection it is important to note that students may not generally be obliged to use AI-based applications (see Data protection). Services and tools which are provided (hosted) by the university itself or can be used via an interface accepted by the UOS constitute an exception to this rule. A list of corresponding AI-based applications with information on their use in German is updated regularly.

In principle, it is important to support students in particular in developing a critical and reflective attitude when using AI-based applications: it is necessary to understand, for example, that text-generating AI systems are not intelligent dialog partners which retrieve documented knowledge from a database, but text generators that perform a probability-based calculation of word combinations on the basis of training data.

With regard to the training data, providers of AI systems do not always operate transparently (see the comparison of different applications in terms of their transparency). The training data may e.g. only include data sets used for basic training before the release of an application, or it may also include content that users enter into the application after its release. The quality of the data used for generating output has a significant influence on the quality of the output and can e.g. lead to the reproduction of biases contained in the training data.

Definition of Terms

The output is the delivered result of a generative AI application in response to a query. The result is based on a probability-based calculation, which is why the answer to a given query may not be exactly reproducible.

The quality of the output can be significantly influenced by elaborate prompting, i.e. giving instructions to an AI-based application. The term prompt refers to the specific request made by the user to the AI-based application. In addition to the central question, a prompt may also contain source material and instructions for processing the request.
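The structure of such a prompt can be illustrated with a minimal Python sketch. All wording and labels below are invented for illustration and are not tied to any specific AI application or provider:

```python
# A prompt often combines processing instructions, source material
# and the central question into a single piece of text.
# All content below is invented for illustration.
parts = {
    "Instruction": "Summarize the excerpt in two sentences for first-year students.",
    "Source material": "Generative AI systems predict likely word sequences; they do not understand content.",
    "Question": "What is the main limitation of such systems?",
}

# Join the labeled parts into the text that would be sent to an AI application.
prompt = "\n\n".join(f"{label}: {text}" for label, text in parts.items())
print(prompt)
```

The more precisely the instructions and source material are specified, the more the output can be steered in the desired direction.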

The term "hallucinating" is sometimes used when AI systems produce results that have no equivalent in reality. This may be content or source information which is part of the output and corresponds to the pattern with which an AI system has been trained, but which cannot be evaluated as factually correct. This happens because AI-based tools generate results based on probabilities without being able to understand the content.

A bias exists when training data contains prejudices e.g. based on skin color, gender or religious affiliation. These are reflected in the output due to the functional principle of word-for-word probability calculation. The same applies to correlations that appear in the training data. For example, if the term 'professor' is statistically more frequently combined with male designations, the output is also more likely to contain such a combination.
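The word-for-word probability calculation, and how it reproduces an imbalance in the training data, can be illustrated with a toy sketch in Python. The frequency counts below are invented for illustration only and are not taken from any real training corpus:

```python
# Toy next-word selection: pick the most probable continuation
# of a phrase such as "The professor said that ...".
# The counts are invented for illustration.
next_word_counts = {"he": 70, "she": 25, "they": 5}

total = sum(next_word_counts.values())
probabilities = {word: count / total for word, count in next_word_counts.items()}

# Greedy decoding always selects the most probable word, so an
# imbalance in the (training) counts is reproduced in the output.
most_likely = max(probabilities, key=probabilities.get)
print(most_likely, probabilities[most_likely])  # he 0.7
```

Real language models work with vastly larger vocabularies and contexts, but the basic principle remains the same: continuations are selected by probability, not by understanding.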

More information on terminology is provided in German by the Digitale Lehre Portal.

In addition to conveying the knowledge that AI-generated results must always be checked and verified, it is also important to raise awareness of the negative and positive effects that the use of AI-based tools can have on a student's personal learning progress. Reflective work with AI-based applications can support the acquisition of individual skills, such as the development of problem-solving strategies or an improvement in the ability to express oneself verbally. The range of AI-based applications includes tools that can support students in creating individual learning plans or teaching staff in composing course schedules. The Digitale Lehre Portal provides examples of such tools and various usage examples (in German). The KI-Campus, a learning platform for artificial intelligence funded by the BMBF, also offers a wide range of free online courses, podcasts, blog posts etc. to strengthen one's AI skills.

To enable teaching staff and students to meet the changing requirements and explore new potentials, a variety of materials and courses to help acquire new skills are currently being created across the higher education landscape. These include e.g. internal training courses and guidelines, as well as event recordings and best practice examples from other academic institutions. A list of available materials and courses in German is updated regularly.

Osnabrück University is taking a range of measures to offer all members of the UOS support in learning how to use AI-based applications responsibly. In addition to the opportunities and risks of using AI-based applications, ecological, social and economic implications will also be addressed in the future.

Legal assessments

Legislation regarding the use of generative AI is currently still incomplete. In addition to the General Data Protection Regulation (GDPR) and the Act on Copyright and Related Rights (UrhG), various directives and ordinances currently form the legal framework for the use of AI-based applications in the fields of teaching, learning and examination in Germany. Among other things, they define the protectability of works as well as the user's responsibilities and obligations.

The AI Act, published in the Official Journal of the European Union on 12.07.2024, is intended to provide more clarity. According to Article 113 of the regulation, it entered into force twenty days after its publication and applies 24 months later; parts of it apply after 6, 12 or 36 months, respectively. The AI Act describes, among other things, a transparency obligation relating to the labeling of AI-created content.

Unless otherwise stated, the legal assessments presented here are largely based on the paper Rechtliche Aspekte des Einsatzes von KI in Studium, Lehre und Prüfung (2023) by Janine Horn and will be updated regularly.

The following table lists existing laws from which assessments for dealing with AI-based applications can be derived.


Laws and recommendations

What is regulated?

Act on Copyright and Related Rights (UrhG)

  • Copyright protection
  • Rights of use to input, output and training data

AI Act

  • Responsibility and obligations of providers and operators of AI systems
  • Protection of persons affected by AI systems

AI Liability Directive

  • Fault-based liability of providers and users of AI systems

General terms and conditions (GTC) of AI providers

  • Labeling obligations
  • Compliance with ethical principles
  • Rights of use and liability regarding the output

General Data Protection Regulation (GDPR)

  • Responsibility and obligations for data processing
  • Rights of data subjects if an AI is fed with personal data, if it uses this data or if it serves as the basis for personal decisions

Federal Data Protection Act (BDSG)

  • fills in opening clauses of the GDPR
  • only applies insofar as the GDPR does not apply directly

Lower Saxony Data Protection Act (NDSG)

  • applies in addition to the GDPR
  • covers all situations in which personal data is processed by public bodies in the state of Lower Saxony

Position of the State Commissioner for Data Protection (LfD) of Lower Saxony

  • Processing of personal data in the supervision of electronic face-to-face and remote examinations
  • AI-based automated decisions in individual cases

Lower Saxony Higher Education Act (NHG) in conjunction with university regulations

  • Autonomy of the examination performance
  • Permitted examination types and forms
  • Electronic examination forms and their supervision (electronic remote examinations)
  • Permitted processing of personal data for examination purposes

Guideline for the use of AI-based applications at Osnabrück University

  • Principles of dealing with generative AI

So far, AI-generated content has generally not been protected by copyright, as protection requires human creation according to the Act on Copyright and Related Rights (UrhG). This leads to the following discrepancy: an AI-based application cannot legally be considered the author of the content it generates; likewise, the users of the AI-based application cannot be considered the authors of such content if they do not have sufficient creative influence on the creation of the work. Nor does the provider of a generative AI application usually hold any rights under the UrhG or related rights. Nevertheless, users can be authors of AI-generated content if they have sufficient influence on the generated content within the creation process. The same applies if they further process or arrange AI-generated content. A distinction is made between:

  • exclusively AI-generated, non-protectable content
  • AI-supported, human, protectable creations (where AI is only used as an aid)


See the paragraphs on protectability and authorship according to UrhG: Horn, 2023, page 2 and following.

One challenge with regard to AI-based applications is the fact that a duty of care in relation to their use cannot yet be defined, due to the complex processes of AI architectures (Horn, 2023, page 15). The publication of AI-generated content that contains third-party copyrighted material constitutes an infringement. However, a breach of the duty of care, e.g. failing to label AI-generated content within a text, would first have to be provable. This is made difficult by the fact that AI-based applications may not indicate all sources used in the manner required by the academic code of practice.

Although AI-generated content is generally free to use, it may contain copyrighted material, it may be based on training data that contains copyrighted material, or it may contain false information without any of this being apparent to the user. The AI Act is intended to eliminate this legal uncertainty. Irrespective of this, AI-generated content can be used without the consent of the copyright holder if this is done in accordance with the following legal permissions (Horn, 2023, page 6):

  • Right to quote, Section 51 UrhG
  • Caricature, parody and pastiche, Section 51a UrhG
  • Copying and sharing for teaching purposes, Section 60a UrhG
  • Copying in teaching media, Section 60b UrhG
  • Copying and sharing for scientific research, Section 60c UrhG
  • Use of database content for teaching and research purposes, Section 87c (1) No. 2 and No. 3 UrhG

If AI-based applications are used to generate texts, images, programming code, etc., the users have certain responsibilities. The generated content must be independently verified and checked for errors and for e.g. inflammatory or discriminatory statements. Normative over- or underrepresentations and omissions, both in relation to persons or groups and to scientific and other content, also have to be checked by the users where necessary.

Teaching staff members and students who use AI-generated results unchecked and unlabeled run the risk of not identifying sources with sufficient care. AI-generated content should therefore generally be labeled as such in the context of teaching, learning and examination. Although there is currently no obligation under the Act on Copyright and Related Rights (UrhG) to label AI-generated content, an obligation may arise e.g. from the general terms of use of providers of AI-based applications (Horn, 2023, page 15).

Universities may also define labeling obligations by means of internal regulations, since it may be difficult to deduce from conventional Declarations of Authenticity how e.g. the use of AI-based applications is to be cited in scientific papers. The data protection section offers suggestions for modifying Declarations of Authenticity, which can be carried out as required in courses, schools or departments.

Attempted examination infringements

Another challenge in dealing with generative AI applications is that it is difficult to prove whether and to what extent students have used them in text production, e.g. to write term papers. A legally reliable method of verification is unlikely to become available in the near future.

A sensible approach could therefore be to focus on existing legal requirements, such as the duty of care and labeling, combined with an internal definition and communication of specific regulations for the use of generative AI applications in the context of teaching, learning and examination. For example, agreements can be concluded on the tools and procedures permitted or not permitted in a course (see Use of AI in teaching and learning), or adapted Declarations of Autonomy can be used in the context of examinations (see Autonomy in the context of (written) coursework).

Since it is generally inadmissible to enter works to which third parties hold rights into a chat application such as ChatGPT, teaching staff are likewise not permitted to enter students' examination results into search engines or into plagiarism detection software that is not provided by the UOS.

Data protection

If teaching staff members use AI applications in class, they must do so in compliance with data protection regulations. As soon as students' personal data is processed by an external service, a data processing agreement in accordance with Article 28 of the GDPR is required, which would oblige the university to carry out appropriate checks (see Horn, 2023, page 14). Teachers are therefore e.g. not allowed to compel their students to create an account and use ChatGPT, Gemini or other chatbots in order to complete tasks with their help.

A possible alternative is to use applications that are operated (hosted) by the university. You can find out more about these applications and the data protection-compliant settings they offer on the  Digitale Lehre Portal (in German).

The use of AI-based tools from external providers is possible, however, if teaching staff members ensure that the use is voluntary for the students and that there are no disadvantages for those who choose not to use them.

The use of generative AI applications requires special care on the part of users with regard to the protection of personal data and personal rights when entering content and prompts. According to the GDPR, "any information that relates to an identified or identifiable living individual" constitutes personal data (European Commission website: What is personal data?). This includes surnames and first names, email addresses containing first names and surnames, private addresses and health information. Such third-party data may not be entered into AI-based applications, nor may images, video or audio recordings of the persons concerned.

Autonomy in the context of (written) coursework

At university, students are familiarized with the practices of academic work and trained in them. The autonomous writing of academic texts is an essential part of this. Students as well as teaching staff must act in a legally compliant and ethically responsible manner in accordance with good scientific practice. Some AI-based applications can be useful for academic work, provided that users are aware of possible risks and use the tools appropriately.

Despite the fact that the legal situation regarding the use of AI-based applications is not yet fully clarified (see Legal assessments), students are obliged to complete their examinations autonomously and to provide information on all types of aids they make use of. Which (AI) aids are permissible in the given context and how their use is to be made transparent must be defined and communicated in advance by the teaching staff or school. For this purpose, the use of a Declaration of Autonomy adapted to the respective requirements of the subject and the examination format is recommended (see Template for Declaration of Autonomy).

Teaching staff members are also called upon to critically review examination requirements and examination tasks. In the long term, it seems necessary to reflect on course-related examination requirements both in terms of didactics and formal aspects and, if necessary, adapt them to new requirements. Teaching staff can find suggestions for practical implementation in various continuing education courses offered by Higher Education Didactics.

The specific expectations of teaching staff concerning the autonomy of students can vary depending on the course, the examination context and the associated teaching/learning objectives, but must always be communicated transparently and reliably. Teachers should specify in writing which resources are available to the students, how the use of AI-based applications is to be documented and what role the teaching staff plays in the process. Teachers may e.g. define the form of autonomy expected. With regard to the use of AI-based tools, they can define or limit the permitted scope, e.g. by creating a positive list (permitted tools), formulating a general permission or a general prohibition, and sharing it within the course in written form. Specific suggestions and guidance for dealing with and using AI-based applications can be found in the section on the use of AI in teaching and learning.

Composing written work enables students to engage intensively with a (subject-)specific topic and allows them to practise the academic code of conduct. In addition, written work is used to obtain course or examination grades, which in turn represent the results of the respective learning or writing processes. These processes are individual and include different approaches, solutions and tools, the admissibility of which is subject to the principles of good scientific practice. It is not permissible to have a thesis created entirely by others, be it humans or digital tools such as AI-based applications.

In accordance with the General Examination Regulations (APO Section 15 (4); access via Zugangs-, Zulassungs- und Prüfungsordnungen, as of May 2024), the undocumented use of AI-based applications or the use of unpermitted AI-based applications is to be assessed as an examination infringement, even if the act itself is not easy to prove. In such cases, the examination infringement will result in a fail grade (5.0) for the corresponding examination or coursework.

It is, however, legitimate to obtain feedback on sub-processes, as this practice has a positive effect on the engagement with the content. This also includes having an AI-based application pose questions in order to practice formulating answers without the help of aids.

Information on the use of AI-based applications in the process of composing coursework should be just as mandatory as information on all other relevant sources and aids. The Modern Language Association of America (MLA) e.g. defines three rules for citing AI-based applications in this context. Teachers can recommend these or similar rules to their students for guidance. A German translation of the basic principles as well as suggestions on how AI-based applications can be cited when writing a paper can be found in the guide 'Citing AI Tools' from the University of Basel.

Due to the heterogeneity of the university's subjects and schools, the requirements for examination situations and Declarations of Authenticity may vary. However, Declarations of Authenticity should always be supplemented by a documentation and mention of the use of AI-based applications. The use or permissibility of AI-based applications should be discussed with students prior to examinations.

Osnabrück University is also in favor of aligning Declarations of Authenticity with the learning objectives and examination requirements of the respective courses, as well as of expanding Declarations of Authenticity for written examinations: students should confirm that the written and electronic versions of the work are identical and that they are aware that violations of the Declaration of Authenticity constitute an attempted examination infringement, which generally results in a fail grade for the examination.

The following template is based on a Declaration of Authenticity from the Institute of Cognitive Science (IKW). It was composed in consultation with the Examination Administration Coordination Office and the Legal Department of Osnabrück University and can serve as a modifiable basis.

___________________________________________

Surname, first name(s) (in block capitals), date of birth

 

I hereby declare that I have written the following coursework [placeholder, title of examination paper] or have clearly marked the part of the coursework [placeholder, title of examination paper] that is my own work.

This coursework is my own work and demonstrates my level of knowledge, my own understanding and my own viewpoints.

I confirm that I have used only the permitted and cited study aids. If I used AI applications, I did so only to such a limited extent that the independent authorship of this coursework is not jeopardized.

I confirm that I have identified the use of AI-based tools in all cases. I have listed the AI tools used in the list 'Overview of Tools Used'. In the appendix I have documented

- all of the prompts I used in the coursework and/or

- all AI-generated output used in the coursework, cited in each individual case.

 

Furthermore, I confirm that the written and electronic versions of the examination paper [placeholder, title examination paper] are identical.

I am aware that violating the content of this declaration constitutes an attempt to cheat, which shall in principle result in my being awarded a fail grade for this examination.

 

____________________

Place, date and signature

The AI-based tools used for composing coursework and the extent to which they are used can be documented in various ways. Lists and tables are particularly useful. A simple list can provide information on whether AI-based applications were used and if so, for what purpose, e.g. for:

  • brainstorming
  • research
  • translation
  • visualization
  • generating programming code
  • editing images

This type of documentation is especially suitable for shorter written assignments or presentations and can be created independently by the students.

A table can be used to record the type and scope of use of AI-based applications even more precisely. It is advisable for teaching staff to provide categories of required information (e.g. the steps in the process of composing a written paper).

Work step | AI-based application(s) used | Type and scope of use

Brainstorming | ChatGPT | Comparison of own ideas with subsequently AI-generated suggestions and adoption of two aspects (chat history documented in the appendix)

Research | Perplexity | Research on the historical background (chat history documented in the appendix)

Formulation | ChatGPT | Linguistic revision of the entire text

Translation | DeepL | Translation of the source text referenced in chapter 3, paragraph 2

Visualization | Mindverse | Generating an idea to visualize the process described in chapter 4, section 5; intensive independent revision of the presentation

Outline | ... | ...

Another example of such a table can be found in the 'Declaration on the use of generative AI systems' from the University of Hohenheim and in the guide 'Citing AI Tools' from the University of Basel.

Use of AI in teaching and learning

Teachers inevitably have to adapt their courses and examination requirements to changing conditions from time to time. This also applies with regard to the use of AI-based applications.

In order to communicate their expectations of students' performance in courses and examinations transparently, teachers can define learning objectives, name permitted aids and formulate precise expectations before the start of a course. A list of rules or a course agreement that teachers make available to students can be used for this purpose. The advantage of such documents is even greater if they are created together with the students at the beginning of the course (see section Templates for course agreements).

The Digitale Lehre Portal offers further information on this topic for teaching staff.

By defining rules in writing, students gain a certain degree of confidence in dealing with AI-based applications. However, they may also need further support in acquiring methodological AI skills, e.g. because further questions arise during self-study or when writing examination papers, or because they start a course with different levels of prior knowledge. Teachers can support their students by referring them to additional in-depth information or courses.

Support services for students

The micromodule 'Willkommen im KI-Dschungel', created at the Center for Digital Teaching, Information Management and Higher Education Didactics (virtUOS), provides training on the risks and opportunities of using AI-based applications. In this three-part self-study module, participants learn to independently identify and minimize possible risks in the context of academic work that they may encounter in the "digital jungle". The module is aimed in particular at users with no previous knowledge, training them to use AI-based tools effectively and scientifically and raising their awareness of risks and misinterpretations.

The Writing Studio at the University Language Center is currently composing information on the use of AI-based applications in the context of writing academic papers, which is explicitly aimed at students. Workshops on 'AI and academic writing' are held regularly, partly in cooperation with the University Library, so that they include information on the use of AI-based applications in the context of research. The use of AI tools can also be discussed in individual writing consultations, provided that the basic approach has been clarified beforehand.

The online course Sprachassistenzen als Chance für die Hochschullehre, offered by the KI-Campus, includes the module 'KI-Sprachwerkzeuge beim wissenschaftlichen Schreiben'. This module explains how the use of AI-based applications influences the process of academic writing. Students will also find information on how to use AI tools responsibly in order to generate ideas, structure and improve written texts, etc.

The design of courses goes hand in hand with the examination requirements for the students. The following tables provide examples for adapting teaching/learning settings against the background of AI-based applications by formulating learning objectives.

Table 1: Examples for adapting teaching/learning settings against the background of AI

For each level of specialist skills students should acquire, the table lists an exemplary teaching/learning setting for skills acquisition, measures for ensuring the independent performance of students, and possible forms of examination.

1) Basic competence: understanding and independently reproducing content, e.g. listing, naming, describing ...
  - Exemplary teaching/learning setting: Students read a text on a technical term and explain it (in writing) in their own words.
  - Ensuring independent performance: Move the task from the self-study session to the face-to-face session.
  - Possible forms of examination: written exam on site; oral exam.

2) Transfer competence: making connections, e.g. comparing, applying, solving, analyzing, structuring, determining, distinguishing, implementing ...
  - Exemplary teaching/learning setting: Students compare different definitions of a technical term.
  - Ensuring independent performance: Move the task from the self-study session to the face-to-face session. Have the students explain how they used AI when working on the task in self-study (process documentation/presentation).
  - Possible forms of examination: written exam on site; oral examination; homework supplemented by a process documentation (portfolio) or presentation.

3) Reflection competence: commenting on topics, e.g. evaluating, assessing, developing, constructing, deciding, planning ...
  - Exemplary teaching/learning setting: Students take a detailed position on a provocative thesis, in written form and in accordance with academic standards.
  - Ensuring independent performance: Move the task from the self-study session to the face-to-face session. Have the students explain how they used AI when working on the task in self-study (process documentation/presentation). Have the students give each other peer feedback and ask them to revise their text on this basis.
  - Possible forms of examination: term paper supplemented by a process documentation (portfolio) or presentation; oral examination.

Table 2: Integration of learning objectives in dealing with AI in a task

For each level of methodological skills students should acquire in relation to AI, the table lists possible tasks for acquiring these skills. The students ...

1) Basic competence: e.g. understand an AI tool as an assistant
  - ... inform themselves about the functionalities of generative AI applications, their opportunities and challenges.
  - ... read a text generated by an AI application on a technical term and explain its structure.

2) Transfer competence: e.g. use an AI tool as an assistant
  - ... formulate suitable prompts.
  - ... document the use of an AI application in the form of prompts when composing their written work.

3) Reflect and evaluate: e.g. critically reflect on their own use of AI tools
  - ... reflect critically on the AI-generated output, drawing on their own specialist knowledge.
  - ... reflect on the application of a GPT model in their own work process.


Ulrike Hanke's "Lernen und Prüfen in einer Welt mit ChatGPT mit Hilfe der Lernzieltaxonomie" and Anette Glathe, Jan Hansen, Martina Mörth and Anja Riedel's "Vorschläge für Eigenständigkeitserklärungen bei möglicher Nutzung von KI-Tools" served as guides when creating the tables.

A course agreement regulates which tools and media may be used in a specific course and to what extent. It may also contain more individual agreements. If possible, students should be involved in designing the content of the agreement in order to make the teaching and examination processes transparent and convey reliability in the (collaborative) work to all persons involved. Christian Spannagel's Rules for Tools and the following Student Guide created at the Institute of Cognitive Science (IKW) are two examples of what such a course agreement can look like.

Student Guide

  1. Respect the law as well as university and examination regulations. Before using AI-based tools (like ChatGPT), familiarize yourself with and comply with the rules for the respective course. Additionally, inform yourself about using the tool responsibly without violating laws (e.g. copyright). Lastly, make it clear when a text/passage was produced by AI, and ideally also what was fed into the AI.
  2. Use AI to support your learning, not to replace it! AI tools can enhance your learning but can also potentially think for you (e.g. by writing essays for you). As you are studying to gain skills and knowledge, learning how to use them in a sensible, supporting way is crucial. Here are a few ideas on how to use a GPT model for your studies:
    1. Use the GPT model as a writing tool (e.g. generating essay headlines, paraphrasing, proofreading).
    2. Use the GPT model as a learning partner (e.g. create mind maps or flashcards, self-test knowledge, explain concepts).
    3. Iterate and converse with the GPT model (e.g. ask about earlier statements or for clarification of specific terms).
    4. Summarize material with the GPT model (e.g. learning material, paper, videos).
    5. Boost your coding with the GPT model (e.g. as a syntax help, debugging, code explanation, code examples).
  3. Be aware of the risks when using AI tools.

In addition to the benefits for scientific research, numerous potential sources of errors and misconduct exist. Although a paragraph produced by a language model may appear realistic, it might contain inaccurate or nonsensical information. Additionally, AI tools may produce sources that, despite being wholly fabricated, have a convincing appearance and syntax. Therefore, always critically consider and verify AI-generated output.

Before and when using a GPT model, please consider the following:

- Review the usage, acknowledgment, citation, and other guidelines in the university's or the course's rules and regulations for generative AI, large language models and ChatGPT.
- Understand the capabilities and limitations of ChatGPT.
- Determine whether the assignment involves acquiring fundamental information or whether the usage of ChatGPT is sensible.
- Take into account which subjects could be cleverly connected to yield novel insights.
- Verify that the results provided by ChatGPT are reliable, correct, and consistent.


Based on "Unlocking the Power of Generative AI Models and Systems such as GPT-4 and ChatGPT for Higher Education: A Guide for Students and Lecturers" by Gimpel et al. (2023).

The text of the Student Guide is presented here in its original English version provided by the IKW, including minor editorial changes.

Further resources

Below you will find sources of information and courses for further education offered by Osnabrück University or other universities and institutions, in online and offline formats. This selection represents only a small part of the possibilities currently available to expand your own knowledge and individual skills independently.

Further training options on campus

As part of the higher education didactics qualification program for teaching staff, a workshop on competence-oriented examination (recording, reflecting on and evaluating learning progress) is usually held twice a year. Participants receive support in the preparation, implementation and grading of competence-oriented examinations. The use of generative AI tools is discussed as well.

Online sources of information

Information from other universities
Information from other institutions
Information on the legal background

Further information on the legal background can be found, for example

Sources

"What is artificial intelligence and how is it used?", European Parliament (August 27, 2020),  https://www.europarl.europa.eu/topics/en/article/20200827STO85804/what-is-artificial-intelligence-and-how-is-it-used (accessed: March 2024).

Liesenfeld, Andreas/Lopez, Alianda/Dingemanse, Mark: "Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators." In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. July 19-21, Eindhoven (2023). doi: 10.1145/3571884.3604316. Available online at:  https://opening-up-chatgpt.github.io/ (accessed March 2024).

Horn, Janine: "Rechtliche Aspekte des Einsatzes von KI in Studium, Lehre und Prüfung" (14.07.2023):  https://www.souveraenes-digitales-lehren-und-lernen.de/wp-content/uploads/2023/09/KI_Recht_14072023_V2.pdf

Hanke, Ulrike: "Lernen und Prüfen in einer Welt mit ChatGPT mit Hilfe der Lernzieltaxonomie", based on CC By 4.0: Learning objectives taxonomy in the revised form of Anderson, L.W. & Krathwohl, D. (2001). A Taxonomy for Learning, Teaching, and Assessing. A Revision of Bloom's Taxonomy of Educational Objectives. Addison Wesley. URL:  https://hochschuldidaktik-online.de/wp-content/uploads/sites/3/2023/02/Lernen-Pruefen-mitChatGPT-Lernzieltaxonomie_neu.pdf

Glathe, Anette/Hansen, Jan/Mörth, Martina/Riedel, Anja: "Vorschläge für Eigenständigkeitserklärungen bei möglicher Nutzung von KI-Tools" (as at: 25.8.2023):  https://www.dghd.de/die-dghd/downloads/