Messages from Vilnius

17 – 19 June 2024

The following are the first messages on global Internet governance processes. Messages from the sessions are yet to be finalised and will be published here soon.

 

Global Internet governance processes

On the Global Digital Compact

EuroDIG looks forward to the opportunities to amplify the purpose and objectives of the Global Digital Compact (GDC) following the Summit of the Future. EuroDIG offers to play its part in contributing to and monitoring progress in the implementation of GDC commitments.

EuroDIG strongly supports the commitment in the Rev.1 draft text of the GDC to strengthen the multistakeholder model of governance and advises against any watering down of this commitment in the finalisation of the text.

EuroDIG believes that the GDC process should build on the strong foundations and accomplishments of the WSIS instead of creating new mechanisms.

EuroDIG supports enhancing the role of the UN IGF.

EuroDIG recommends simplifying the process of GDC follow up and review of the implementation of the GDC commitments, with a substantive role for the Internet Governance Forum and the WSIS Forum so that non-governmental stakeholders can fully contribute to the process.

More details on the wiki.

On the WSIS+20 Review

EuroDIG recognises the positive linkages between the WSIS+20 Review, the Global Digital Compact and the 2030 Agenda for Sustainable Development.

EuroDIG believes there needs to be a comprehensive understanding of the process of the WSIS+20 review.

The review should not undermine the achievements of both the 2003 and the 2005 phases of the WSIS. It should instead be primarily an opportunity to enhance the processes of Internet governance, global digital policy and cooperation, by using the Sao Paulo Guidelines as an inspiration for making processes more inclusive, transparent and accountable.

More details on the wiki.

 

Main Topics

Main Topic 1: European policies and strategies

Subtopic 1 | Human Rights in the Digital Era, Europe’s Role in Safeguarding Human Rights Online
There is an urgent need to ensure effective implementation of human rights frameworks online and to address the lack of awareness of these frameworks. A concerted multistakeholder effort is crucial to a) help develop a clear and robust legal framework, b) ensure a proactive role of civil society in influencing discourse, shaping and monitoring implementation, and c) emphasize the social responsibility of the private sector and its duty to respect the rights of users.

Subtopic 2 | One for All, All for One: The Role of Cooperation in Enhancing Cyber Resilience in Europe
Concerted and coordinated efforts built on trust, cross-sectoral collaboration and international cooperation are vital to addressing cybersecurity challenges. These efforts should include mechanisms for cyber cooperation in critical situations such as wartime. This requires training and education, as well as inclusive cybersecurity measures that cater to all segments of society.

Subtopic 3 | You on Signal and Me on Telegram – Messenger Interoperability by EU Regulation
Interoperability relies on technical – protocol-level – interoperability, which is being addressed through standardisation in the Internet Engineering Task Force (IETF), as well as on the operational and economic will to connect and exchange. It is vital to develop and refine mechanisms of market evaluation, to enhance user choice, and to maintain end-to-end encryption and privacy across different platforms. The extraterritorial implications of the Digital Markets Act (DMA) and the potential impact on users who rely on non-EU messaging services must be addressed.

Main Topic 2: GovTech – putting people first in digitalizing public services and the use of data

Subtopic 1 | GovTech Dynamics: Navigating Innovation and Challenges in Public Services
Digital transformation in Public Administrations (PAs) requires stronger digital skills, which may call for mandatory training for civil servants. Challenges include digital skill gaps, data analysis capabilities, and regulatory barriers, requiring a shift towards enabling innovation. Cities and other PAs can innovate in collaboration with academia and the private sector through projects such as the GovTech Lab. These labs test new policies and technologies, fostering innovation through skill development and co-creation. Design thinking and user experience should prioritise simplicity and functionality. Cities can use open data dashboards to be more transparent to citizens by allowing them to easily visualise data about their living environment. Future challenges include digital identification, AI regulation, and ensuring technology accessibility for all, including senior citizens. Practical strategies and public co-creation are necessary for meaningful change.

Subtopic 2 | European approach on data governance
The new EU legislation on data is creating new scenarios. Despite this, the EU GDPR, the CoE Convention 108+, and the “privacy at all costs” approach remain central, as the Data Governance Act recognises the prevalence of privacy legislation. Tension exists between the need to explore data and to open it up so that PAs can be transparent, and the need to protect citizens’ right to privacy. The new EU legislation (Data Governance Act and Data Act) tries to strike a balance between the two. The European values enshrined in the GDPR are being adopted elsewhere, both because of EU influence and because of recognition of their validity. Furthermore, the CoE Convention 108+ is open for signature by non-member states too.

Subtopic 3 | Empowering communities: partnerships for access to services
Digitalisation has become increasingly relevant since Covid and with the rise of climate change-related catastrophes. Digital instruments allow rescuers and PAs to quickly identify who is in need and where in a specific territory. Nonetheless, catastrophes also make digital infrastructure vulnerable, as disruptions in communication can be caused by unusual weather events. Large parts of the population still have little or no access to the Internet, which is particularly true for people living in low-income and/or remote areas. This leads to discrimination in access to services and opportunities. However, this digital divide can be bridged with relatively cheap connectivity infrastructure: examples exist of public-private partnerships that reduce the costs of bringing connectivity to rural areas. On top of improved connectivity, services should be easily accessible, with straightforward interfaces that require little expertise (accessibility by design).

Main Topic 3: Artificial Intelligence

Subtopic 1 | Innovation and ethical implication
The proliferation of AI-related initiatives and documents and the adoption of regulatory and human rights frameworks are key to fostering users’ trust in AI technologies, tackling AI’s complexity and range of applications, and providing tailored solutions to the specific needs of diverse stakeholders. A multistakeholder approach to AI governance is crucial to ensure that AI development and use are informed by a variety of perspectives to minimise bias and serve the interests of society. A pressing ethical concern is the military use of AI, which is yet to be addressed by existing regulatory frameworks and will need more focused attention in the near future.

Subtopic 2 | The Framework Convention on AI and human rights, democracy and the rule of law
The CoE Framework Convention on AI and human rights, democracy and the rule of law is an important step towards a global approach to AI regulation. The CoE Framework Convention and the EU AI Act complement each other. Further steps should follow, taking into account the need to address the growing issues of AI from a global, rather than a regional, perspective.

Subtopic 3 | Identification of AI generated content
Current AI detection systems are unreliable or even arbitrary. They should be used only in an experimental context, with a very high level of caution, and in particular not for assessing students’ work. Without reliable AI detectors, we have to rely on education and critical assessment of content that takes into account that any content can easily be generated by AI. Watermarking and certification of origin would be a more reliable means of authenticating content and should be supported by regulation.

 

Workshops

Messages on the workshops are still being finalised and will be published here soon.

 

YOUthDIG Messages

Policy Propositions on Artificial Intelligence and Human Rights

Discrimination in AI
Artificial Intelligence (AI) has the potential to reinforce and create new forms of discrimination. This stems from the inherent biases in data, which is far from neutral and often reflects existing societal prejudices and biases. To address these issues, we propose transparency as a key action. Implementing synthetic data and involving focus groups, particularly those representing minority and intersectional backgrounds, can ensure a more balanced and inclusive dataset, making AI systems more sensitive to diverse perspectives.

Moreover, there is a pressing need for legal clarification regarding responsibility for discrimination in AI. Clear guidelines and accountability measures must be established to effectively address and prevent bias.

Bias of Policy Makers Due to Techno-Solutionism
Policy makers sometimes fall into the trap of techno-solutionism, relying heavily on technological fixes without considering the broader social context. To counter this bias, an interdisciplinary approach is essential. By involving experts from various fields – technology, sociology, ethics, and law – and drawing on the professional backgrounds of multiple representatives, more sustainable and holistic solutions can be achieved.

A multistakeholder approach is also fundamental. Collaboration across different sectors and disciplines will ensure that diverse viewpoints are considered, which leads to more applicable and comprehensive policies. To facilitate this, we propose the creation of a dedicated body focused on intersectional and interdisciplinary collaboration. This body would meet regularly to assess ongoing issues and work towards continuous improvement through cooperative efforts.

AI in Border Control
The use of AI in border control raises significant human rights concerns, particularly for refugees and individuals crossing borders who are inherently vulnerable. The collection of biometric data often occurs without proper consent, exacerbating these concerns.

Furthermore, AI is not a solution to the migration crisis. In fact, the inherent biases and risks associated with AI could worsen discrimination and lead to unjust outcomes. Therefore, the use of AI in border control should be prohibited, prioritising human rights and ethical considerations above technological solutions. These policy propositions aim to address critical issues at the intersection of AI and human rights. By promoting transparency, accountability, and interdisciplinary collaboration, we can ensure the ethical and fair use of AI technologies.

Education
Our goal is to empower individuals through education, to enable them to assert their AI and digital rights and critically analyse technological solutions. We encourage the implementation of constructive and informative campaigns to raise awareness of AI impacts. Additionally, we advocate for the integration of AI literacy into school curriculums.

Data for AI training
AI programs are “trained” by being exposed to large quantities of existing works, photos, information, and data. Public awareness of how our data are used, especially in training new AI programs, is very low. Most people are unaware that their personal photographs are being used for AI training. Even when notifications are provided, they are often buried in general Terms and Conditions or presented in a way that users do not fully understand. Currently, users can opt out, but as this issue grows, we propose that users must explicitly opt in, following the GDPR example. Obtaining explicit consent for using personal content should become paramount.

Given the vast scope of the data in question and the economic interests of businesses, this issue should be standardized at the international or EU level. Precise recommendations and obligations should be imposed on companies, non-governmental organizations, governmental institutions, and all stakeholders. This regulation should ensure that consent is obtained from individuals whose biometric data are being used to train models that generate new content.

Technosolutionism
Using AI to solve problems may seem progressive, glamorous, and investment-worthy. However, AI might not be the most efficient way to solve a problem, for example, in public services from waste to migration management. In fact, it may even create new issues, as is expected with online child safety measures, such as biometric age verification or client-side scanning.

We urge integrating a comprehensive, multistakeholder impact assessment and analysis of both actual and potential checks and balances before implementing AI as a problem-solving tool, be it in digital policy or for practical issues. We urge policymakers to carefully consider this impact assessment, provide justification for their decisions, and be held accountable based on the assessment and associated risks and costs.

Deepfakes
Deepfake videos are increasingly common in the media, especially during crises and elections. This misinformation prevents rational decision-making, undermines trust in institutions, and harms democracy. To combat this, Europe needs a legal framework for deepfake usage, funding for detection technologies, and mandatory labeling for all deepfakes to ensure transparency.

We propose a system to confirm the authenticity of information, such as a “badge of authenticity” using a QR code or blue tick circle. Media houses could use this system to verify content. Additionally, educating citizens on recognizing misinformation and working with technology companies would strengthen this effort. These measures will help protect society from the damaging effects of fake news and deepfakes, ensuring a more informed and democratic populace.
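At its core, the proposed “badge of authenticity” would bind a piece of content to a verifiable token issued by a media house, which could then be encoded into the QR code. The following is a minimal illustrative sketch, not the system the Messages prescribe: it assumes a symmetric publisher key purely for simplicity, whereas a production scheme would use public-key signatures (for instance along the lines of the C2PA content-credentials approach) so that anyone can verify a badge without holding the secret.

```python
import hashlib
import hmac

# Hypothetical secret held by the certifying media house (an assumption for
# illustration only; a real badge system would use public-key signatures).
PUBLISHER_KEY = b"example-media-house-key"

def issue_badge(content: bytes) -> str:
    """Return an authenticity token binding the publisher to this content."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    # This token string is what would be encoded into the QR code.
    return f"{digest}:{tag}"

def verify_badge(content: bytes, token: str) -> bool:
    """Check that the content matches the token the publisher issued."""
    digest, tag = token.split(":")
    if hashlib.sha256(content).hexdigest() != digest:
        return False  # content was altered after the badge was issued
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

Scanning the QR code and re-running the verification step would reveal whether the content has been altered since the badge was issued, which is the transparency property the proposal aims for.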

An intersectional approach for youth participation for a better Internet governance future

Representation:

  • The creation of spaces with co-management structures for youth to provide policy recommendations while monitoring the implementation of these actions.
  • Promoting a requirement for young people to have a ‘seat at the table’ by developing standards and recommendations that foster meaningful participation and inclusion in high-level discussions.
  • Increasing the stream of additional stable funding and income for youth organizations (including youth councils), such as operational grants, while assessing and monitoring successful implementation.

Education:

  • Strengthen both non-formal and formal education through collaborative efforts among relevant stakeholders – young people, decision makers, experts, academia and many more – by working towards an understanding of how and why education needs to be tailored to specific groups, covering the following topics: youth participation, digital literacy (including how bias is present in the online sphere), inclusivity and accessibility, and critical thinking.
  • Recognise the current work of non-formal education agents on these topics for the creation of an organized European curriculum that can be implemented in formal education.
  • This educational program would follow the ‘youth for youth’ principle, where the young people would be, within the participatory model, included in all aspects of design and development processes, adopting an intersectional approach.

Inclusivity:

  • Inclusivity starts with language: we should ensure that all policies and regulations are transparent and comprehensible to everyone, especially youth, limiting the overuse of technical jargon. Policy briefs and documents need a youth-friendly version that explains things transparently, including how these policies affect everyday life.
  • An intercultural aspect is also crucial for an inclusive environment, therefore LLMs should be trained in different (European) languages to reduce inequalities in access to information and knowledge. To ensure accuracy, this needs to have some level of human verification wherever possible. This could be achieved, for example, in partnership with local universities.
  • Apply standards that make online spheres more accessible, demanding all websites to have accessible features such as services for people with disabilities (visual and hearing impairments) to ensure that access to all content is equal.

For the successful development of the points above, it is crucial to have secured stable funding and adopt an intersectional approach to ensure no one is left behind.

Fair and Privacy Preserving Use of Data

Data privacy should not depend on one’s personal, social or economic status.

  • Information about data collection and data use must be presented to users transparently, in simple language that is easy to understand.
  • A crackdown on dark patterns: outlawing dark patterns and empowering consumer agencies to identify them.
  • Creating a standard requiring users to opt in to data sharing, without penalty or exclusion for those who decline.

In the long term, we demand that the burden of managing privacy be taken away from users and that power be shifted back to them, allowing users to decide what information is shared about them.

New Economics of Data
We acknowledge that data is an asset, the product of users’ labour, and that it is used as a commodity to facilitate price discrimination. Therefore, we advocate for preventing companies from increasing the prices of products and services based on users’ personal data shared without their consent.

Age verification
We are seriously concerned about children’s welfare, and we acknowledge that an effective technical solution to protect children online without infringing on privacy has yet to be discovered.

Entities who wish to improve children’s welfare online should not rely on privacy-reducing, technocentric solutions, but should prioritize:

  • Strengthening law enforcement financially, educationally and structurally.
  • Shifting liability to providers of explicit and mature rated content.

Find the Messages from previous years in our archive.

More information on the wiki

Downloads 2024