The Ministry of Digital Affairs Is Committed to Promoting Trustworthy AI by Organizing Deliberative Democratic Activities and Inviting International AI Companies for Opinion Exchange
The Ministry of Digital Affairs (moda) stated that, with the rapid development of artificial intelligence (AI), it continues to build a trustworthy AI development environment that balances technological innovation and risk governance. The moda is promoting various AI policies and initiatives, including establishing an AI assessment center, issuing AI assessment guidelines, and becoming a partner of the international non-governmental organization (NGO) "Collective Intelligence Project (CIP)." This year, the moda further organized the "Deliberative Democracy Activities on Applying AI to Promote Information Integrity" and invited leading international AI vendors for opinion exchanges. Companies including Meta, Google, Microsoft, and OpenAI all agree that AI must be trustworthy, accurate, and secure. The moda will continue to engage in dialogue with various sectors, striving to promote trustworthy AI and collaborating to create AI application services that meet the needs of society.
The moda stated that, to ensure AI applications align with the public interest, it officially became a partner of the international NGO "Collective Intelligence Project" in May last year. Through the project's alignment conferences, the moda helps gather public consensus in Taiwan on the demands and risks of artificial intelligence in the global public domain, collectively addressing the "AI alignment problem." The moda is committed to promoting trustworthy AI and a trusted digital environment. To ensure the reliability of AI applications and to impede the spread of erroneous, false, and forged messages and images produced by generative AI, the moda has drawn on international policies, standards, and industry requirements to establish an AI assessment center, release AI assessment guidelines, and conduct ongoing assessments of large language models (LLMs). The moda is also gradually establishing verification institutions and laboratories to promote AI assessment and verification and to foster international exchange and cooperation.
The moda organized two deliberative workshops last year on "AI Democratization" to explore the societal impacts of AI development from individual, stakeholder-community, and national perspectives, inviting relevant stakeholders to discuss how to respond to the development of generative AI. Through the discussions and participant feedback, it became clear that AI ethics must address diverse societal expectations across the entire data lifecycle, from data collection and processing to data utilization.
To further respond to the expectations of various sectors for the proper application of AI in public governance, the moda collaborated with the Institute of Science, Technology and Society of National Yang Ming Chiao Tung University, the Industrial Technology Research Institute, and Stanford University's Center for Deliberative Democracy to hold the "Utilizing AI to Promote Information Integrity Deliberative Democracy Activity" on March 23. Through the government's dedicated "111" short-code SMS platform, the moda sent random text messages inviting citizens to participate. A total of 450 experts, scholars, citizens, community members, and digital professionals joined online discussions on topics such as applying AI to identify and analyze the integrity of information. The event received widespread attention, indicating high public expectations for using AI to promote information integrity.
On April 17, the moda convened leading international AI vendors to discuss strengthening platform analysis and identification mechanisms and submitting large language models (LLMs) to the AI assessment center. Participating vendors included Meta, Google, Microsoft, and OpenAI; all agreed that AI must be trustworthy, accurate, and secure. They also noted that, during the development of language models, their teams continuously test and train the models to meet the expectations of reliable AI systems. Regarding submitting their language models for evaluation by the AI Product and System Evaluation Center (AIEC), the participating vendors responded positively, with Meta and OpenAI indicating preliminary agreement during the meeting.
Furthermore, the international vendors presented established or upcoming practices for analyzing and identifying generative AI content during the meeting. These practices include SynthID technology (which watermarks AI-generated images and detects watermarks within them, making AI-generated content easier to identify), AI-generated content detection technology, traceability labeling, signatures, user reporting mechanisms, and alerts informing users that content is AI-generated.
Based on the feedback gathered from both the public and international AI vendors at this event, the moda will continue to urge online platform operators to promote information identification and analysis mechanisms. Additionally, when users post or broadcast content on online platforms that includes personal images generated by deepfake technology or artificial intelligence, the content should be indicated or labeled to ensure information integrity.
The moda emphasizes that collaborative efforts by platforms, governments, and the public are essential to address the challenges AI technology poses to information integrity. Establishing transparent regulatory mechanisms and enhancing the public's ability to recognize AI-generated content can promote information authenticity and integrity, while democratic deliberation will strengthen societal resilience. Going forward, the moda will continue to build a trustworthy AI development environment that balances technological innovation and risk governance, including cultivating and retaining talent, prioritizing AI ethics and legislation, promoting data governance and circulation, and constructing a human-centered, robust digital ecosystem.