At a time when artificial intelligence is rapidly moving from research environments into the very core of the economy, public administration, and everyday life, the International AI Standards Summit 2025 offered something truly rare: a global forum dedicated to practical and responsible action.
Held on 2 and 3 December 2025 in the Republic of Korea, the Summit brought together the international community around a shared conviction that the future of artificial intelligence must be shaped thoughtfully, responsibly, and through collaboration. The event was organized by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU), and hosted by the Korean Agency for Technology and Standards (KATS) – the national standards body of the Republic of Korea, operating under the Ministry of Trade, Industry and Energy.
From the very beginning, the atmosphere was exceptional. The organization was of the highest standard, the sessions clearly structured, the logistics impeccable, and the overall environment highly conducive to meaningful discussion. The Summit unfolded in a calm, respectful, and focused manner, allowing complex issues to be examined in depth rather than in haste – a rare quality in today’s global conversations on AI.

The event brought together more than 300 participants from 65 countries, including representatives from Serbia, among them the Acting Director of the Institute for Standardization of Serbia, Tatjana Bojanić. Participants came from national governments and regulatory authorities, international and national standards organizations, leading technology and industrial companies, academic and research institutions, civil society and human rights organizations, and United Nations agencies, representing both the public and private sectors. This diversity was not symbolic but essential. Artificial intelligence is no longer a purely technical domain; it shapes markets, work, governance, culture, and human relationships. The presence of such a broad and balanced community clearly demonstrated a shared understanding that no single country, sector, or discipline can define the future of AI alone. Only inclusive collaboration can lead to outcomes that are trustworthy, sustainable, and globally relevant.
The Summit was officially opened on 2 December 2025 with a welcoming address by the Prime Minister of the Republic of Korea, Kim Min-seok. He emphasized that the development and deployment of artificial intelligence must be guided in ways that strengthen economic growth, social cohesion, and international stability rather than produce fragmentation and insecurity, thereby setting a clear political and ethical framework for the Summit. The Prime Minister’s presence at the opening underscored the strategic importance that the Republic of Korea attaches to artificial intelligence, standardization, and global cooperation, as well as its recognition of standards as a key instrument for steering technological progress in the public interest.
This was followed by a key moment of the Summit, when the heads of the international standards organizations officially presented the Seoul Declaration on Artificial Intelligence and International Standards. In his address, Dr. Sung Hwan Cho, President of ISO, emphasized that the governance of artificial intelligence cannot be reduced solely to technical performance or regulatory compliance. Responsible use of AI must encompass social and technical dimensions, environmental and societal impacts, and respect for human rights throughout the entire AI life cycle – from design and development to deployment and oversight. He underlined that the Summit was initiated out of the need to build capacity: to empower societies worldwide to apply AI in a responsible and inclusive manner, in close cooperation with civil society and the academic community, as well as with governments and industry. Jo Cops, President of IEC, broadened the perspective, stressing that AI is fundamentally a question of how societies choose to live. Technology is never neutral; it reflects values and priorities. Standardization occupies a unique position: it neither develops nor sells AI systems, but shapes the conditions under which they become safe, reliable, and socially acceptable. Tomas Lamanauskas, Deputy Secretary-General of ITU, highlighted the need to avoid fragmentation and uncertainty, particularly in developing countries. Human rights, safety, and security must remain priorities. He pointed to ITU’s concrete contribution: approximately 70 AI-related standards already published, more than 200 under development, and over 700 related initiatives, developed in close cooperation with the United Nations system and with a strong ethical focus.
The plenary sessions that followed laid the intellectual foundation of the Summit. Key speakers from international organizations, governments, and standards bodies examined the factors shaping global AI governance, repeatedly returning to one central insight: there are no simple solutions. Technology is advancing faster than institutions, and governance mechanisms must therefore be both robust and adaptable. In this context, international standards were consistently recognized as the most appropriate instrument for addressing this challenge.
During the discussions held as part of the Summit, participants had the opportunity to move beyond general principles and engage in an in-depth examination of complex issues. AI trust governance was explored, with particular attention to the challenges of defining safety without undermining system robustness, while interoperability emerged as both a technical and governance challenge of fundamental importance for accountability and global coherence. Special attention was given to consumer protection and market trust, where quality was identified as the most critical factor, and conformity assessment, testing, and verification were highlighted as the key link between innovation and responsible market deployment. Participants also addressed the application of AI in a broader context, including the authenticity and integrity of AI-generated content and sectoral impacts. Agriculture stood out as one of the most promising areas of application, while discussions on sustainability pointed to the need for a critical assessment of actual environmental and social impacts.

A significant portion of the dialogue focused on building national AI governance infrastructure through sectoral discussions in telecommunications, finance, and energy. These covered consensus-building under conditions of uncertainty, the integration of AI and energy systems, commercialization, and the importance of shared terminology. Across all areas, data were recognized as a foundational element, with repeated emphasis on the need for common formats and governance frameworks as a prerequisite for building and maintaining trust.
One of the most significant sessions focused on the practical aspects of building global trust in artificial intelligence through standards. Wael William Diab, Chair of ISO/IEC JTC 1/SC 42, the joint subcommittee on artificial intelligence, described how AI standards are being developed to serve both as implementation guidance and as a basis for certification in different contexts, highlighting ISO/IEC 42001 in particular as an example of timely and proactive action. Representatives of TÜV pointed to the unique risks posed by AI and the need for context-specific approaches to testing and certification, while also emphasizing the importance of an integrated quality assurance system throughout the entire value chain. Dr. Jean Innes, Chief Executive Officer of the Alan Turing Institute, stressed the importance of strong links between research, policymaking, and standardization to ensure that standards remain scientifically grounded and responsive to rapid technological change. From the perspective of developing countries, Esther Kunda, Director General for Innovation and Emerging Technologies at Rwanda’s Ministry of ICT and Innovation, highlighted the importance of a sectoral approach to AI deployment for human development, particularly in agriculture, with international standards serving as key tools for capacity building and trust.
The central outcome of the Summit was the Seoul Declaration on Artificial Intelligence and International Standards, which affirms that artificial intelligence represents a powerful opportunity to enhance the well-being of humanity, while at the same time requiring a commitment to its inclusive and responsible governance. Through the Declaration, ISO, IEC, and ITU committed to integrating socio-technical dimensions into standards development, embedding human rights throughout the entire AI life cycle, strengthening an inclusive community, and enhancing public–private cooperation in capacity building.

The significance of the International AI Standards Summit 2025 lies not only in its scale, organization, and diversity of participants, but above all in the fact that standardization assumed a leading role at a critical moment, offering a neutral, consensus-driven space in which governments, industry, the scientific community, and civil society can jointly shape the conditions under which AI operates. In doing so, ISO, IEC, and ITU positioned international standards as the foundation of global trust in artificial intelligence and as a means of translating shared values into practical frameworks that ensure the AI revolution unfolds thoughtfully, cooperatively, and in the service of humanity.
Prepared by: Tatjana Bojanić