Version: 2.0 — Update date: February 11, 2025.
Polaria Tech, as a company specialized in artificial intelligence solutions, is committed to scrupulously complying with the European Regulation on Artificial Intelligence (commonly known as the AI Act). This compliance policy describes how Polaria Tech ensures the compliance of its Retrieval-Augmented Generation (RAG) AI chatbots with this regulatory framework, by defining roles and responsibilities, distinguishing between the applicable risk levels, and implementing the required transparency obligations.
Polaria Tech designs and delivers chatbots based on AI (in particular via RAG technology). These solutions correspond to the definition of artificial intelligence systems (AIS) given by the European Regulation — that is, autonomous and adaptive software systems capable of generating predictions, content or decisions based on data. As such, Polaria Tech acts as an AI system provider within the meaning of the AI Act, i.e. the entity that develops an AI system and markets it under its own name or brand.
This policy applies to all AI products and services offered by Polaria Tech.
Polaria Tech has a strong commitment to regulatory compliance. The company explicitly commits to not providing any AI solution that relies on practices prohibited by the AI Act. In particular, Polaria Tech prohibits any use of the AI practices listed in Article 5 of the Regulation, such as:
- Subliminal, manipulative, or deceptive techniques aimed at altering a person's behavior without their knowledge;
- The exploitation of individuals' vulnerabilities (due to their age, disability, socio-economic situation, etc.);
- Social scoring systems that rate people based on their behavior or personal characteristics;
- Predictive systems that assess the risk of a person committing an offence (predictive policing);
- Unjustified mass biometric surveillance or any other practice expressly prohibited by law.
Polaria Tech does not develop or market any system that falls into these unacceptable risk categories. This commitment ensures that our AI solutions respect fundamental rights and ethical principles, in accordance with Article 5 of the AI Act.
The European Regulation takes an approach based on the level of risk presented by AI systems. Polaria Tech therefore distinguishes between two compliance scenarios according to the classification of its solutions: on the one hand, potentially high-risk systems, and on the other hand, limited-risk systems (the category into which our chatbots mainly fall). Each level of risk is subject to specific measures on our part to ensure compliance.
As a matter of principle, Polaria Tech does not offer systems that fall into the category of high-risk AI systems as defined by the AI Act (for example, AI used in recruitment, credit evaluation, health, or other areas listed in Annex III of the Regulation). However, should Polaria Tech develop or provide a solution classified as high risk in the future, all required precautions and regulatory measures would be rigorously implemented.
Concretely, Polaria Tech is committed to applying all the obligations provided for by the AI Act for high-risk AI systems. These include in particular: an appropriate risk management system, comprehensive technical documentation, human oversight mechanisms where required, regular testing of accuracy, robustness and cybersecurity, and compliance with conformity assessment procedures before placing the system on the market. These measures ensure that the system meets all legal and technical requirements applicable to its high risk level. Polaria Tech will also ensure that users or deployers of such systems are provided with all the information necessary for compliant and safe use.
Most of the AI solutions offered by Polaria Tech (in particular our AI chatbots) fall under limited risk within the meaning of the AI Act. These systems do not present a high risk to security or fundamental rights, and are therefore not subject to the strict obligations and prior assessments imposed on high-risk AI systems. However, they must comply with the specific transparency obligations that the Regulation provides for AI systems with a limited level of risk.
Polaria Tech recognizes that, legally, limited-risk AI systems are subject only to transparency obligations (minimal-risk AI systems have no specific obligations). As a result, the company ensures that all transparency requirements applicable to its chatbots and other limited-risk AI systems are fully met. This compliance effort aims to clearly inform users and ensure responsible use of our technologies, even when they present only a limited risk.
(NB: The “Right to Explanation” section for automated decisions has been removed from our policy, as this principle is not required by the AI Act for limited-risk AI systems.)
In accordance with the Regulation (Chapter IV on Transparency) and our ethical commitment, Polaria Tech is implementing all the transparency measures required for AI systems intended to interact with the public or generate content. These measures aim to ensure that users are always aware that they are interacting with an AI or that they are consuming AI-produced content, in order to maintain trust and clarity. The main transparency requirements applied by Polaria Tech are as follows:
- Identification of AI during interactions: For any Polaria Tech chatbot or virtual assistant that interacts directly with natural persons, we make sure to explicitly inform the user that it is an AI system (for example via a message or a visual indication in the interface). At no time should users believe that they are interacting with a human without being notified. (According to Article 50(1) of the AI Act, this information may be omitted if the fact of dealing with an AI is obvious to a normally attentive user in the given context of use.)
- Tagging generated content: Polaria Tech's AI systems that can produce audio, video, image, or text content incorporate tagging mechanisms. Concretely, any content generated or manipulated by our AI is accompanied by an indicator (such as a digital watermark or a mention in the metadata) indicating its artificial origin. These markings are designed in a machine-readable format so that they can be detected automatically, and they aim to clearly alert human users that the content was generated by artificial intelligence. For example, an image created by our AI will include an invisible tag indicating that it is a synthetic image. Polaria Tech thus complies with the obligation of transparency on synthetic content provided for in Article 50(2) of the AI Act.
- Reliability, robustness and interoperability: Polaria Tech makes a point of ensuring the reliability and robustness of its AI systems, as well as the interoperability of the technical transparency solutions it deploys. The marking and information mechanisms mentioned above are developed according to the state of the art, in order to be effective in various technical environments. We ensure that these solutions remain effective, interoperable, robust and reliable, as far as technically possible, regardless of the type of content generated and the implementation constraints. This ongoing effort aims to fulfil our transparency obligations without degrading the user experience, so that our systems inspire confidence through their quality and compliance.
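To illustrate the first transparency measure above, the following Python sketch shows one simple way a chatbot could guarantee the AI disclosure required by Article 50(1): the disclosure is prepended to the first reply of each conversation and suppressed afterwards. The function name and disclosure wording are illustrative assumptions, not Polaria Tech's actual implementation.

```python
# Illustrative sketch (not Polaria Tech's actual code): ensure the user is
# informed, at the start of a conversation, that they are talking to an AI.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def compose_reply(answer: str, already_disclosed: bool) -> str:
    """Prepend the AI disclosure to the first reply of a conversation.

    Once the disclosure has been shown, subsequent replies are returned
    unchanged (repeating it is unnecessary when the AI nature of the
    interaction is already clear to the user).
    """
    if already_disclosed:
        return answer
    return f"{AI_DISCLOSURE}\n\n{answer}"
```

In practice the `already_disclosed` flag would be tracked per conversation session, and the disclosure could equally be a persistent visual badge in the chat interface rather than a text message.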
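The machine-readable marking of generated content described above can take many forms (watermarks, metadata fields, provenance manifests). As a minimal sketch, assuming a JSON provenance envelope of our own design rather than any specific watermarking standard, text output could be tagged and automatically detected as follows:

```python
# Illustrative sketch: wrap AI-generated text in a machine-readable
# provenance envelope so that its artificial origin can be detected
# automatically. The envelope format here is a hypothetical example,
# not a description of Polaria Tech's production marking scheme.
import hashlib
import json
from datetime import datetime, timezone

def tag_generated_text(text: str, model_id: str) -> str:
    """Attach a provenance record flagging the text as AI-generated."""
    envelope = {
        "content": text,
        "provenance": {
            "synthetic": True,  # machine-readable AI-origin flag
            "generator": model_id,
            "created": datetime.now(timezone.utc).isoformat(),
            # Integrity digest so tampering with the content is detectable.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }
    return json.dumps(envelope, ensure_ascii=False)

def is_synthetic(payload: str) -> bool:
    """Automatically detect the AI-origin marker in a payload."""
    try:
        data = json.loads(payload)
        return bool(data.get("provenance", {}).get("synthetic"))
    except (json.JSONDecodeError, AttributeError):
        return False
```

For images, audio, or video, the same idea would typically be realized with embedded metadata or invisible watermarks rather than a JSON wrapper, ideally following an interoperable provenance standard.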
Polaria Tech has implemented internal procedures to ensure day-to-day adherence to this policy. The team in charge of AI governance carries out active regulatory monitoring in order to anticipate changes in the AI Act and associated standards. In the event of an update of the legal framework or a clarification of requirements (for example via guidelines from the European Commission), Polaria Tech will adapt its practices without delay and update this policy accordingly.
Internal training is provided to developers, project managers and other relevant staff to make them aware of specific obligations (in particular transparency obligations and the prohibition of certain practices). In addition, each new AI project undergoes a conformity assessment during the design phase in order to identify its level of risk and to apply the appropriate measures from the start.
Finally, Polaria Tech maintains transparency with respect to its customers and users on its compliance procedures. On request, the company can provide additional information on how a particular AI system meets the requirements of the AI Act (for example, a description of built-in transparency mechanisms, summary technical documentation, etc.). This open approach reflects our commitment to combining innovation in artificial intelligence with compliance with the regulatory and ethical framework in force.
Polaria Tech remains available for any questions regarding this AI Act compliance policy. The company reaffirms that regulatory compliance and user trust are at the heart of its mission in developing responsible artificial intelligence solutions.