Routz Belgium Policy for the Use of Artificial Intelligence

Introduction

Routz Belgium is committed to full compliance with the applicable laws related to the use of artificial intelligence in the countries where Routz Belgium provides products and services. In addition, Routz Belgium is committed to the ethical use of artificial intelligence. This Policy for the Use of Artificial Intelligence (“Policy”) outlines Routz Belgium’s requirements regarding the adoption of all forms of artificial intelligence at Routz Belgium. The adoption of artificial intelligence includes use for business efficiency, operations, and integration into Routz Belgium’s products and services.

This Policy applies to all Routz Representatives, in accordance with the definition of AI systems as specified in ISO/IEC 22989:2022. The Management of Routz Belgium is responsible for compliance with and supervision of this Policy.

Definitions

  • “Artificial intelligence” or “AI” means the use of machine learning technology, software, automation, and algorithms to perform tasks and make rules or predictions based on existing data sets and instructions.
  • “An artificial intelligence system” or “AI system” means software developed using one or more of the techniques and approaches listed in Appendix I that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations, or decisions that affect the environments in which they interact.
  • “Closed AI System” means an AI system where the input by a single user is used to train the AI model. User input data is kept separate from other users, and the data is considered more secure.
  • “Integrated AI Tools” means AI tools that are included in existing software tools that have been approved and are used by Routz Belgium and that do not require approval for use by the Management.
  • “Generative AI” means a form of artificial intelligence that automatically creates content based on questions or requests from users.
  • “Non-public Routz Belgium Data” means any information the disclosure of which could violate the privacy of individuals, government regulations or statutes, jeopardize Routz Belgium’s financial situation, damage its reputation, or reduce its competitive advantage.
  • “Open AI System” means an AI system where the input is used by all users to train the AI model. Input data of all users is not private and can be revealed to other users.
  • “Personal Information” means information that identifies, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular person or household.
  • “Routz Representatives” means all directors, officers, board members, employees, contractors, representatives, affiliates, agents, and any person or entity providing services for or on behalf of Routz.
  • “Management” means the representative of Routz Belgium who is responsible for AI.

Guiding principles

The intent of this Policy is to provide general guidance on the use of AI at Routz Belgium so that Routz Belgium can use AI as a tool while continuing to comply with legal obligations and act ethically. The use of AI at Routz Belgium should never compromise Routz Belgium’s core values or introduce unjustified risk to the organization. Instead, the use of AI at Routz Belgium should focus on improving business efficiency and strengthening Routz Belgium’s ability to fulfill its mission.

This Policy is not intended to address every use of AI at Routz Belgium by every Routz Representative. Certain business departments and functions at Routz Belgium involve additional considerations and potential risks.

See also “Prohibited Uses” in section 4 below for situations in which AI may not be used within Routz Belgium, and “High-risk use of AI systems” in section 6 below for situations in which extreme caution is required when considering the use of AI.

In addition, certain Integrated AI Tools are embedded in existing, approved Routz Belgium software and do not require additional approval for use. An example is Microsoft Word, in which Microsoft has integrated an AI tool to check spelling and grammar. The use of Integrated AI Tools in approved software at Routz Belgium is permitted, provided the software is used in line with previous general business use.

The following principles should be followed when considering the use of an AI system at Routz Belgium:

  1. AI systems are trained on data that can inherently contain biases. Users of these systems are responsible for reviewing AI-produced content for bias and correcting it if necessary.
  2. Non-public Routz Belgium Data should never be entered into an Open AI System.
  3. All AI-generated content should be thoroughly reviewed for accuracy by an individual with the expertise to evaluate such content, and should also receive general proofreading and editing. This must be done in accordance with transparency and accountability requirements. AI-generated content should be viewed as a starting point, not the final product. Like all content at Routz Belgium, AI-generated content must match the appearance and identity of the Routz brand.
  4. The use of an AI system must be documented to capture institutional knowledge. For example, if AI is used to create code and is included in a larger portion of the code, there should be documentation about what portion of the code was derived by AI and who reviewed it.
  5. The use of an AI system must comply with the conditions of use or contractual restrictions.
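As an illustration of principle 4, AI-derived code can be documented inline so that the AI-generated portion and its reviewer are captured where the code lives. The tag format, names, and dates below are hypothetical, not prescribed by this Policy:

```python
# Hypothetical provenance annotations for AI-assisted code (principle 4).
# The AI-GENERATED / Reviewed-by tags are an illustrative convention only.

def normalize_name(raw: str) -> str:
    """Normalize a customer name for matching."""
    # --- AI-GENERATED: drafted with an approved AI assistant (2024-05-01)
    # --- Reviewed-by: J. Janssens (senior developer), approved 2024-05-02
    return " ".join(part.capitalize() for part in raw.strip().split())

def build_greeting(raw_name: str) -> str:
    """Human-written wrapper around the AI-derived helper above."""
    return f"Dear {normalize_name(raw_name)},"
```

Structured tags like these also make it possible to search the code base later for every AI-derived section and its reviewer.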

If, in some cases, Routz Belgium employees participate in the development of an AI system, the principles described in the ISO 44001 standard are followed. Where necessary, the controls will be further specified.

Prohibited Uses

There are certain uses of AI that are prohibited. Unless otherwise approved by the Management, Representatives of Routz Belgium are prohibited from using AI systems for any of the following activities:

  1. Using AI systems to identify or categorize students, candidates, employees, contractors, or other affiliated individuals.
  2. Entering trade secrets, confidential information, contractual clauses with customers, sensitive information, or personal data about individuals into an Open AI System.
  3. Entering an individual’s sensitive information into an AI system. “Sensitive Information” includes medical, financial, or political information, ethnic origin, religious beliefs, gender, sexual orientation, or other private matters.
  4. Using an AI system to obtain legal advice, including but not limited to creating policies for internal use or for provision to third parties.
  5. Creating intellectual property that Routz Belgium wishes to register and/or that is of significant value to the organization.
  6. Using Routz Belgium’s time or resources to generate content with an AI system that would be considered illegal, inappropriate, harmful to Routz Belgium’s brand or reputation, or disrespectful to others.

Ethical guidelines

Routz Belgium wishes to act in an ethical manner when using AI, as specified in the ethical guidelines of ISO/IEC 38505-1:2017, and ensures that all AI applications comply with these standards. Note that there may be applications of AI that are legally permissible but do not meet these ethical requirements. Use of an AI system within Routz Belgium must comply with the following ethical guidelines:

  1. Integrity in Use: All users of AI systems should be honest about how AI has helped them do their job. Even when using an approved AI system for an approved use, make sure that your colleagues are aware of your use of the AI system. Do not present AI-generated work as your own.
  2. Unauthorized Use: Do not use Routz Belgium’s time or resources to generate content with an AI system for personal use without prior approval from the Management.

High-risk use of AI systems

There are certain applications of AI systems that are riskier than others. As a company, Routz Belgium is committed to complying with all legal requirements and guidelines related to AI in the countries in which it operates. The European Union has classified the following potential applications of AI as posing a high risk to the health and safety or fundamental rights of natural persons. Therefore, there are several additional requirements for the use of AI systems in such cases. These requirements are listed in Annex II, with certain features highlighted below:

  • Personal data in AI systems: Extreme caution should be exercised when entering an individual’s personal data into a Closed AI System; entering personal data into an Open AI System is prohibited.
  • Screening Applicants: Issues of equity and inclusion surrounding the use of AI in application processes are a potential source of disputes. The use of AI systems in screening is not recommended.
  • Staffing decisions: AI should be used with caution for any use related to promotional, retention, or similar staffing decisions. Extreme caution should be exercised to ensure that biases (including biases in existing datasets) are avoided.

General Standards and Permitted Use

Except for Integrated AI Tools embedded in approved software, all applications of AI systems must be approved by the Management prior to use, to ensure that such use of the AI system complies with the following principles:

  • Legal: The use of AI systems must comply with all applicable laws and regulations, as well as any contractual obligations or restrictions.
  • Ethical: The use of AI systems must adhere to ethical principles, be honest, and avoid bias.
  • Transparent: There should be clear guidelines and internal policies for using an AI system.
  • Necessary: The use of AI systems must serve a valid business purpose and improve Routz Belgium’s business efficiency. The use of AI is not a substitute for human critical thinking or expertise.

Mandatory knowledge

All employees of Routz Belgium who use or will use AI systems must familiarize themselves with this Policy.

Report non-compliance

Directors, officers, employees and agents of Routz Belgium who are aware of any conduct that may violate this Policy have a responsibility to report it and are encouraged to do so. Any reports of suspected misconduct or non-compliance will be investigated by the Management, Human Resources or other applicable parties. Unless they acted in bad faith, employees of Routz Belgium will not be subject to retaliation for reporting possible violations.

If, after a completed investigation and after training and allocated training time have been provided, Routz Belgium determines that an employee still does not comply with this Policy, the employee may be subject to disciplinary measures.

Appendix I: AI techniques and approaches

Machine learning approaches, including supervised, unsupervised, and reinforcement learning, using a wide range of methods, including deep learning.

Annex II: EU requirements for High-risk systems

In order to mitigate the risks to users and affected persons from high-risk AI systems placed on the Union market or otherwise put into service, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and in accordance with the risk management system established by the provider.

Data Quality (43)

High-risk AI systems should be subject to requirements related to the quality of the data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight and robustness, accuracy and cybersecurity. Those requirements are necessary to reduce the risks to health, safety and fundamental rights that exist in the light of the intended purpose of the system where no other measures are reasonably available that are less restrictive of trade, thus avoiding unjustified restrictions on trade.

Access to High-Quality Datasets (44-45)

High-quality data is essential for the performance of many AI systems, in particular when techniques involving model training are used, to ensure that high-risk AI systems operate as intended and safely and do not become a source of discrimination prohibited under Union law. High-quality data sets for training, validation, and testing require the implementation of appropriate data management practices. Data sets for training, validation and testing must be sufficiently relevant, representative, error-free and complete for the intended purpose of the system. The data sets should also have the appropriate statistical characteristics, including with regard to the persons or groups of persons for whom the high-risk AI systems are to be used. In relation to data sets for training, validation and testing, to the extent required by their intended purpose, particular account should be taken of the characteristics, properties or elements specific to a particular geographical, behavioural or functional environment or context in which the AI system is to be used. In order to protect the rights of others against discrimination that may result from bias in AI systems, providers should also be able to process special categories of personal data where there is an important public interest, in order to ensure the monitoring, detection and correction of bias in relation to high-risk AI systems.
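The data-set requirements above (completeness, representativeness, appropriate statistical characteristics) can be supported by simple automated checks before training. A minimal sketch, in which the column names and the underrepresentation threshold are illustrative assumptions, not values mandated by the Regulation:

```python
# Minimal sketch of automated data-quality checks for a training data set.
# The 5% group-share threshold is an illustrative assumption.

def check_dataset(rows: list[dict], group_key: str, min_group_share: float = 0.05) -> list[str]:
    """Return a list of data-quality findings (an empty list means no findings)."""
    findings = []
    # Completeness: flag every record with a missing value.
    for i, row in enumerate(rows):
        if any(v is None or v == "" for v in row.values()):
            findings.append(f"row {i}: missing value")
    # Representativeness: each group should hold at least min_group_share of the rows.
    counts: dict[str, int] = {}
    for row in rows:
        counts[row[group_key]] = counts.get(row[group_key], 0) + 1
    for group, n in counts.items():
        if n / len(rows) < min_group_share:
            findings.append(f"group '{group}': underrepresented ({n}/{len(rows)} rows)")
    return findings
```

Checks like these do not replace a substantive bias review, but they make the completeness and representativeness criteria measurable and repeatable.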

Administration and technical documentation (46)

Information on how high-risk AI systems have been developed and how they perform over their lifetime is essential to verify compliance with the requirements of this Regulation. This requires the registration and availability of technical documentation containing the necessary data to assess the compliance of the AI system with the relevant requirements. Such information should include the general features, capabilities and limitations of the system, the algorithms, data and processes used for training, testing and validation, and documentation related to the risk management system in question. The technical documentation must be kept up to date.

Transparency (47)

In order to address the opacity that leads to certain AI systems being unfathomable or too complex for natural persons, a certain level of transparency should be required for high-risk AI systems. Users should be able to interpret the output of the system and use it accordingly. High-risk AI systems should therefore be accompanied by relevant documentation and instructions for use and contain concise and clear information, including, where appropriate, with regard to potential risks to fundamental rights and risks of discrimination.

Human oversight (48)

High-risk AI systems should be designed and developed in such a way that natural persons can supervise their operation. To this end, appropriate human oversight measures should be determined by the provider of the system before it is placed on the market or put into service. Such measures should ensure, where appropriate, in particular that the system is subject to built-in operational constraints that cannot be circumvented by the system itself, that the system responds to the human operator and that the natural persons entrusted with the task of human oversight have the necessary competences, training and authority to carry out that task.

Accuracy, robustness and cybersecurity (49-51)

High-risk AI systems must perform consistently throughout their lifecycle, achieving an appropriate level of accuracy, robustness and cybersecurity in line with the widely recognised state of the art. The achieved level of accuracy and the relevant accuracy metrics must be communicated to users.

Technical robustness is an essential requirement for high-risk AI systems. Those systems should be able to withstand risks related to the system’s limitations (e.g. errors, irregularities, unexpected situations) and malicious actions that may compromise the security of the AI system and lead to harmful or otherwise undesirable behaviour. Failure to protect against these risks may lead to security impacts or negative impacts on fundamental rights, for example due to erroneous decisions or incorrect or biased output generated by the AI system.

Cybersecurity is crucial to ensure that AI systems are resilient to attempts by third parties to exploit the system’s vulnerabilities in order to modify its use, behaviour or performance or compromise its security features. Cyberattacks against AI systems can target AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the digital assets of the AI system or the underlying ICT infrastructure. Therefore, in order to ensure a level of cybersecurity commensurate with the risks, providers of high-risk AI systems should take appropriate measures, also taking appropriate account of the underlying ICT infrastructure.

Source: EU Artificial Intelligence Regulation, Recitals 43-51