Principles for responsible artificial intelligence in Telenor
Purpose and background
AI has the potential to transform industries and the way we interact with technology, but it also poses challenges and risks in domains such as privacy, security and accountability, as well as through biased outcomes. It is essential that our use of AI is aligned with human values and norms, and that it recognizes the rights and interests of all stakeholders.
At Telenor we are committed to using AI technologies in a way that is lawful, ethical, trustworthy, and beneficial for our customers, our employees and society in general. Telenor has therefore defined a set of guiding principles to support the responsible development and use of AI in a consistent way across our companies. Along with Telenor’s Governing Documents, such as Telenor’s internal policies, the principles shall guide our employees and everyone acting on behalf of Telenor in the development and use of AI, ensuring it is aligned with our Responsible Business goals.
Telenor’s principles for responsible AI are founded on existing laws, relevant international standards, and internal policies, such as those on privacy and security. The principles seek to highlight our commitments in these areas as they relate to AI. Their application shall follow a risk-based and context-aware approach, considering and weighing both the risks and the benefits to all stakeholders.
Our principles are technology-agnostic and meant to cover both current and future uses of AI. However, we acknowledge that working responsibly with AI is a continuous effort and that we will need to evolve our practices over time.
We recognize that skilled employees are fundamental to securing responsible use and development of AI. Telenor promotes awareness of and training on these principles among our employees, including their responsibilities when developing or using AI systems.
Trust and engagement in our organization are essential when exploring AI technologies. At Telenor, we acknowledge the importance of involving employees and their representatives before introducing new working tools that affect their working conditions and/or involve the use of employee data.
The principles are the starting point of our AI governance framework and form the basis of internal guidelines and operational playbooks tailored to specific AI technologies or use cases, such as Generative AI.
Guiding principles
1 HUMAN VALUES, RIGHTS AND PRIVACY
We promote, develop, and use AI systems that respect human values, emphasizing human dignity, well-being and societal benefits. This includes the use of AI to drive our social and digital inclusion goals, and the use of AI systems by others enabled by Telenor.
We respect and promote the rule of law, human rights, democratic values and diversity throughout the use and development of AI systems within our ecosystem to drive a positive, long-term change for society. We recognize the need to safeguard personal data and privacy in our development and use of AI.
We will seek to harness the potential of AI to deliver solutions that contribute positively to the environment and to Telenor’s net-zero target.
2 FAIRNESS AND NON-DISCRIMINATION
We recognize that AI systems can produce biased, unfair outcomes, posing a risk of discrimination based on personal characteristics such as race, ethnic origin, religion, gender, sexual orientation, or disability.
AI should be developed and used in ways that address biases, are based on high-quality data, and promote fair, non-discriminatory and equal treatment and opportunities for all.
3 DATA GOVERNANCE
We recognize that using AI responsibly requires data of high quality and integrity. We apply appropriate data governance and data management practices to AI systems throughout their lifecycle, from idea to retirement.
We assess and monitor the quality of the data used by AI, including any limitations of the training data, the lawfulness of input data, and the suitability of the resulting output data.
4 SAFETY AND SECURITY
We prioritize safety and security as an integrated part of our development and use of AI, establishing safeguards for AI systems to perform reliably and accurately for their intended purpose, and to manage risks to our customers, employees, and society.
We continuously evolve our safety and security safeguards following changes in the threat landscape, to secure the confidentiality, integrity, and resilience of our AI systems and data. We especially acknowledge the need to safeguard children, youth and vulnerable groups.
5 TRANSPARENCY AND EXPLAINABILITY
We provide clear and understandable information on our use and development of AI systems to all stakeholders affected by them. We make stakeholders aware of their interactions with our AI systems, including in the workplace.
We promote openness, transparency and explainability on how we use AI systems to make decisions. We particularly recognize the need to provide explanations when AI systems make decisions that impact the fundamental rights of individuals or groups of individuals. We provide information about how those systems generate a specific output or arrive at a decision.
6 HUMAN OVERSIGHT AND CONTROL
We acknowledge that human oversight and control are needed to govern our development and use of AI, and we monitor and supervise our use of AI to ensure it adheres to the principles in this document.
Where AI-driven decisions would significantly impact human values, the final decision-making power and accountability must lie with a human with appropriate training.
Key terms:
There is no common terminology or set of definitions for AI and related concepts, but the following key terms are meant to guide understanding in the context of this document.
Stakeholders: Stakeholders encompass all organizations and individuals involved in, or affected by, AI systems, directly or indirectly.
Human rights and values: Human rights and values include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
AI: A broad term used to describe an engineered system where machines learn from experience, adjusting to new inputs, and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making.
AI system: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
AI governance: A system of policies, practices and processes organizations implement to manage and oversee their use of AI technology and associated risks to ensure the AI aligns with an organization's objectives, is developed and used responsibly and ethically, and complies with applicable legal requirements.
Responsible AI: Principle-based AI development and governance (see also AI governance), including the principles of security, safety, transparency, explainability, accountability, privacy, non-discrimination/non-bias, among others.
The descriptions above were created using the following references:
OECD AI Principles, OECD, last amended 2023
Key Terms for AI Governance, IAPP, 2023