Policy on the use of artificial intelligence at OsloMet

Artificial intelligence (AI) provides new opportunities in a number of areas. The technology affects teaching, research and how we carry out and manage our duties.

New ethical, educational and technological questions arise as the field of AI develops. To navigate this changing landscape and prepare for the unknown, the University has drawn up an overarching policy on the use of artificial intelligence at OsloMet.

Through this policy, OsloMet seeks to promote the use of AI while also highlighting key factors that must be taken into account when adopting and using AI tools. The policy is overarching and intended to form the basis for more specific guidance for the University’s employees and students.

Artificial intelligence is evolving rapidly. This policy is therefore general and not limited to specific AI technologies or tools.

At OsloMet, we will:

  • explore how artificial intelligence can enrich our educational approaches and improve our research methodologies and administrative services.

  • use artificial intelligence in an ethical, safe and responsible manner. We will provide guidance services and encourage critical reflection. This policy is intended to support this.

  • enable students and employees to develop expertise in the use of artificial intelligence and provide access to the necessary information about AI tools.

  • collaborate nationally and internationally to share experiences of responsible and appropriate use of artificial intelligence.

As we encourage the development and use of AI, we must also be aware of the risks, especially when using new technology. Particular attention must be paid to:

  • technology under development: Remember that AI and associated tools are under development. Many tools are released to the market while they are still being tested. Be critical about what the tool can do, how accurate it is and whether it can be used for the intended purpose.
  • the limitations of tools: Be aware of the limitations of the tools and the fact that they may generate inaccurate, incorrect or random results. Tools can be used to identify patterns or correlations in data, but will not necessarily be able to distinguish between causation and mere correlation. The tools may lack general knowledge and may be unable to prioritise information or interpret subtle language details. Output from the tools must be interpreted and applied with the context and these limitations in mind.
  • transparency: Be transparent about the use of AI tools and use correct references. Avoid plagiarism by checking information and specifying the use of both AI and the sources provided by AI.
  • source criticism: Source criticism is crucial when using AI. Just as an online search can yield links to old information, AI tools can provide information that is outdated, incorrect or generated without any substantial sources (hallucination). Always evaluate the sources used by the tool and fact-check any data produced by the tool.
  • bias and discrimination: The tools are only as good as the data they have been trained on. If there are biases in the data the tools have been trained on, such bias may be reproduced by the tool. This could, for example, result in bias when it comes to cultural values and discriminatory attitudes.
  • replication and “the black box”: One challenge associated with some AI tools is the lack of transparency regarding how the systems generate results or draw conclusions. It can be difficult to explain precisely how a given conclusion was reached, and results may be hard to replicate. It is important to be aware of this issue.
  • human responsibility: Remember, we are responsible for whether and how we use AI and whether and how we use the results generated by AI. If we use AI for decision-making support, we need to understand how the AI tools work and assume responsibility for the decisions that are made. AI can be misused or used in ways that have unintended consequences. AI must always be used together with human judgement and contextual understanding.
  • protection of privacy: Make sure that you know which AI tools can be used with which types of data. If, for example, you are going to process personal data, you need to make sure that the tool you use has been approved for the type of data you will be processing.
  • sustainability: Storage and data processing require substantial amounts of energy. Remember that the larger the models and the more complex the algorithms, the more energy they will require.
  • general laws and regulations apply: Existing laws, regulations and guidelines also apply to the use of AI. Pay special attention to regulations relating to data protection, information security and cheating/plagiarism.

Technological developments mean that we must constantly assess the situation and engage in ongoing dialogue at the University regarding the expertise we need and how technology will be used. This policy provides a framework for the ethical and responsible use of AI at OsloMet and provides direction for more specific advice and guidance for the University’s employees and students.