
Background Information on Generative AI

AI-generated image of UZH, generated in Deep Dream Generator based on a photograph of the main building.

Generative artificial intelligence (AI) tools create content such as text, images, code, music or video in just a few seconds, based on patterns and structures in the data they were trained on.

Generated from data

Depending on the content to be generated, different data are used to train generative AI algorithms. One example is large language models (LLMs), which specialise in textual analysis, text processing and text generation and are trained on very large datasets. The output generated is based on plausibility and probability; generative AI does not understand categories such as “real” vs “fake” or “true” vs “false”. As a result, the output is often inaccurate, biased or even made up (known as hallucinations or confabulations). While the potential of generative AI, which is becoming ever more powerful, can be harnessed for constructive and beneficial ends, it can just as easily be used for destructive and harmful purposes.
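The point that output is based on plausibility and probability rather than truth can be illustrated with a toy sketch. This is a deliberately simplified illustration, not a real language model: the token probabilities below are invented for the example, whereas a real LLM derives them from billions of learned parameters.

```python
import random

# Invented next-token probabilities for the prompt
# "The capital of France is ...". A real LLM computes such a
# distribution from its training data; these numbers are made up.
next_token_probs = {
    "Paris": 0.80,      # plausible and correct continuation
    "Lyon": 0.15,       # plausible but wrong
    "Atlantis": 0.05,   # implausible, yet still possible: a "hallucination"
}

def sample_next_token(probs, seed=None):
    """Sample one token according to its probability weight."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# The model selects a *plausible* token, not a verified *true* one,
# so incorrect or fabricated output is always possible.
token = sample_next_token(next_token_probs)
```

The sketch shows why hallucinations are inherent to this kind of generation: every token with non-zero probability can be produced, and the model has no separate mechanism for checking factual truth.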

Opportunities and risks of generative AI

When used responsibly and in an informed manner as assistance systems, generative AI tools can also be very helpful in a scientific/academic context, e.g. for brainstorming, identifying topics and refining questions, getting an initial overview, generating code, illustrations, tables or presentations, or for troubleshooting and editing.

However, the rapid and ongoing (further) development of a growing number of generative AI tools presents various challenges and risks in both teaching and research. These primarily concern the concept of authorship (and thus also copyright), new forms of scientific misconduct involving undeclared or non-transparent use of generative AI tools, the very credibility of science and scientific publications in general, the significant misuse potential for (e.g. anti-scientific) disinformation campaigns, and the transmission of biases.

Limitations of generative AI

Against this backdrop, it is clear that responsible use of generative AI tools also requires knowledge of their limitations:

  • Content errors and factual mistakes: Generative AI may provide inaccurate or incorrect information.
  • Hallucinations: Generative AI may deliver output that sounds plausible but is simply fabricated.
  • Incorrect source attribution: References are very often incorrect or invented, as generative AI is usually not yet able to cite texts and other information correctly.
  • Non-transparent origin of training data: It is not usually possible to identify what training data was used to develop the output. The associated copyright issues are still largely unresolved. The lack of transparency concerning the underlying training data can also obscure bias.
  • Reproduction of bias: If a certain scientific perspective or other bias predominates in the training data used, the output will reproduce this partiality.
  • Limited understanding of context: Generative AI often fails to comprehend the context or the significance of a question or text, or does so in a way that is heavily dependent on the relevant prompt. This may cause the output to vary widely, or to appear nonsensical or incorrect.

An initial conclusion

Informed use of generative AI that is in line with academic integrity not only requires practice with the relevant tools and knowledge of their limitations, but also a deep understanding of the respective academic field, as users must have the ability to check and assess the factual accuracy of all statements independently of AI.

Further Information

Explanation of Terms

Helpful terminological clarifications can be found on the following website: