Messages from Vilnius

17 – 19 June 2024

Download your digital copy of the “Messages from Vilnius”!

 

Global Internet governance processes

On the Global Digital Compact

EuroDIG looks forward to opportunities to amplify the purpose and objectives of the Global Digital Compact (GDC) following the Summit of the Future. EuroDIG offers to play its part in contributing to, and monitoring progress in, the implementation of GDC commitments.

EuroDIG strongly supports the commitment in the Rev.1 draft text of the GDC to strengthen the multistakeholder model of governance and advises against any watering down of this commitment in the finalisation of the text.

EuroDIG believes that the GDC process should build on the strong foundations and accomplishments of the WSIS instead of creating new mechanisms.

EuroDIG supports enhancing the role of the UN IGF.

EuroDIG recommends simplifying the process of follow-up and review of the implementation of the GDC commitments, with a substantive role for the Internet Governance Forum and the WSIS Forum, so that non-governmental stakeholders can fully contribute to the process.

More details on the wiki.

On the WSIS+20 Review

EuroDIG recognises the positive linkages between the WSIS+20 Review, the Global Digital Compact and the 2030 Agenda for Sustainable Development.

EuroDIG believes there needs to be a comprehensive understanding of the process of the WSIS+20 review.

The review should not undermine the achievements of the 2003 and 2005 phases of the WSIS. It should instead primarily be an opportunity to enhance the processes of Internet governance, global digital policy and cooperation, using the São Paulo Guidelines as an inspiration for making processes more inclusive, transparent and accountable.

More details on the wiki.

 

Main Topics

Main Topic 1: European policies and strategies

Subtopic 1 | Human Rights in the Digital Era, Europe’s Role in Safeguarding Human Rights Online
There is an urgent need to ensure effective implementation of human rights frameworks online and to address the lack of awareness of these frameworks. A concerted multistakeholder effort is crucial to a) help develop a clear and robust legal framework, b) ensure a proactive role of civil society in influencing discourse, shaping and monitoring implementation, and c) emphasize the social responsibility of the private sector and its duty to respect the rights of users.

Subtopic 2 | One for All, All for One: The Role of Cooperation in Enhancing Cyber Resilience in Europe
Concerted and coordinated efforts, built on trust, cross-sectoral collaboration and international cooperation, are vital to addressing cybersecurity challenges. These should include mechanisms for cyber cooperation in critical situations such as wartime. This requires training and education, as well as inclusive cybersecurity measures that cater to all segments of society.

Subtopic 3 | You on Signal and Me on Telegram – Messenger Interoperability by EU Regulation
Interoperability relies on technical (protocol) interoperability, which is being addressed through standardisation in the Internet Engineering Task Force (IETF), as well as on the operational and economic willingness to connect and exchange. It is vital to develop and refine mechanisms of market evaluation, enhance user choice, and maintain end-to-end encryption and privacy across different platforms. The extraterritorial implications of the Digital Markets Act (DMA) and the potential impact on users who rely on non-EU messaging services must be addressed.

Main Topic 2: GovTech – putting people first in digitalizing public services and the use of data

Subtopic 1 | GovTech Dynamics: Navigating Innovation and Challenges in Public Services
Digital transformation in Public Administrations (PAs) requires stronger digital skills, which may entail mandatory training for civil servants. Challenges include digital skill gaps, data analysis capabilities, and regulatory barriers, requiring a shift towards enabling innovation. Cities and other PAs can innovate in collaboration with academia and the private sector through projects such as the GovTech Lab. These labs test new policies and technologies, fostering innovation through skill development and co-creation. Design thinking and user experience should prioritize simplicity and functionality. Cities can use open data dashboards to be more transparent to citizens by allowing them to easily visualize data about their living environment. Future challenges include digital identification, AI regulations, and ensuring technology accessibility for all, including senior citizens. Practical strategies and public co-creation are necessary for meaningful change.

Subtopic 2 | European approach on data governance
The new EU legislation on data is creating new scenarios. Even so, the EU GDPR, the CoE Convention 108+, and the “privacy at all costs” approach remain central, as the Data Governance Act recognizes the prevalence of privacy legislation. Tension exists between the need to exploit and open up data so that PAs are transparent, and the need to protect citizens’ right to privacy. The new EU legislation (Data Governance Act and Data Act) tries to strike a balance between the two. The European values enshrined in the GDPR are being adopted elsewhere, both because of EU influence and because of recognition of their validity. Furthermore, the CoE Convention 108+ is open for signature by non-member states too.

Subtopic 3 | Empowering communities: partnerships for access to services
Digitalization has become more and more relevant since Covid and with new climate change-related catastrophes. Digital instruments allow rescuers and PAs to quickly identify who is in need and where in a specific territory. Nonetheless, catastrophes also make digital infrastructure vulnerable, as disruption in communication can be caused by unusual weather events. Large parts of the population still have no or little access to the Internet, which is particularly true for people living in low-income and/or remote areas. This yields discrimination in access to services and opportunities. However, this digital divide can be bridged with relatively cheap connectivity infrastructure: examples exist of public-private partnerships that reduce the cost of bringing connectivity to rural areas. On top of improved connectivity, services should be easily accessible with straightforward interfaces that require little expertise (accessibility by design).

Main Topic 3: Artificial Intelligence

Subtopic 1 | Innovation and ethical implications
The proliferation of AI-related initiatives and documents and the adoption of regulatory and human rights frameworks are key to fostering users’ trust in AI technologies, tackling AI’s complexity and applications, and providing tailored solutions to the specific needs of diverse stakeholders. A multistakeholder approach to AI governance is crucial to ensure that AI development and use are informed by a variety of perspectives, to minimise bias and serve the interests of society. A pressing ethical concern is the military use of AI, which is yet to be addressed by existing regulatory frameworks and will need more focused attention in the near future.

Subtopic 2 | The Framework Convention on AI and human rights, democracy and the rule of law
The CoE Framework Convention on AI and human rights, democracy and the rule of law is an important step towards a global approach to AI regulation. The CoE Framework Convention and the EU AI Act complement each other. Further steps should follow, taking into account the need to address the growing issues of AI from a global, rather than a regional, perspective.

Subtopic 3 | Identification of AI generated content
Current AI detection systems are unreliable or even arbitrary. They should not be used other than in an experimental context with a very high level of caution and particularly not for assessing works of students. Without reliable AI detectors, we have to rely on education and critical assessment of content that takes into account that any content can easily be generated by AI. Watermarking and certification of origin should be a more reliable means to authenticate content and should be supported by regulation.
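The certification-of-origin idea above can be illustrated with a minimal sketch: a publisher attaches a cryptographic tag to content, and anyone holding the verification key can later detect tampering. This is purely illustrative — the key, function names, and HMAC scheme here are hypothetical stand-ins; real provenance and watermarking systems use public-key signatures and standardised metadata formats.

```python
import hashlib
import hmac

# Hypothetical shared key for this sketch only; real schemes would use
# public-key signatures so verifiers never hold a signing secret.
SECRET_KEY = b"publisher-signing-key"

def certify(content: bytes) -> str:
    """Return a hex tag binding the content to the publisher's key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Constant-time check; False means the content was altered or unsigned."""
    return hmac.compare_digest(certify(content), tag)

article = b"Original newsroom text"
tag = certify(article)
print(verify(article, tag))          # True for untouched content
print(verify(article + b"!", tag))   # False once the content is altered
```

The point of the sketch is the asymmetry the Message describes: verifying a certificate of origin is cheap and reliable, whereas detecting AI generation without such a certificate is not.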

 

Workshops

Workshop 1a | Child Safety Online – Update on Legal Regulatory Trends Combatting Child Sexual Abuse Online

Rapporteur: Francesco Vecchi, Eumans

  1. Advancements in legal and regulatory measures on Child Sexual Abuse (CSA)
    Workshop 1a discussed three recent measures on the protection of children from online Child Sexual Abuse (CSA): the proposed EU CSA Regulation (CSAR), the new UK Online Safety Act, and the positive results of the Lithuanian Law on the Protection of Minors against Detrimental Effects of Public Information. There was agreement on the need for better regulation in this field, emphasising the accountability of online service providers for monitoring illegal and harmful material and safeguarding minors.
  2. Major concerns and benefits
    CSA is currently increasing exponentially and has serious consequences for the rights and development of children. For this reason, recognising such depictions and preventing child sexual abuse should go hand in hand. Participants are concerned about the safety of users, including with regard to the potential use of technology. Breaches of confidential communication or anonymity are viewed critically. At the same time, advantages are recognised in the regulations, e.g. with regard to problem awareness or safety-by-design approaches. Age verification procedures are perceived as both a risk and an advantage; their use should not come at the expense of anonymity and participation.
  3. The interplay of privacy and safety
    The participants of Workshop 1a of EuroDIG believe privacy and safety are intertwined and inseparable, advocating that legal solutions to combat child sexual abuse online must strive to optimise both. These measures should be centred on children’s rights and their best interests, as a way forward to achieve this balance.

Workshop 1b | Protecting vulnerable groups online from harmful content – new (technical) approaches

Rapporteur: Francesco Vecchi, Eumans

  1. Type of content:
    Self-generated abusive material and pathological content are emerging as the most widespread harms to vulnerable groups online. All stakeholders are aware that measures and regulations must be taken to protect vulnerable groups. They are also aware that the rights and needs for protection against violence and abuse as well as privacy and participation must be guaranteed.
  2. Minimizing the Impact on Privacy, Inclusive, and Accessible Technical Approaches:
    Client-side scanning for detecting known CSAM online involves methods that can minimise the impact on privacy, learning nothing about the content of a message except whether an image matches known illegal content. Concerns are raised about anti-grooming techniques analysing visual and textual data, while the use of AI raises questions about proxies, bias, and accuracy. Effective task-based models that respect privacy require comprehensive and accurate data, especially the use of metadata. Authorities play a critical role in double-checking the effectiveness of these measures and privacy compliance. Looking ahead, data-saving and anonymity-preserving age verification mechanisms could be a future-proof solution for robust verification and privacy protection.
  3. Diversity and Multi-Stakeholder Philosophy:
    A diversified multi-stakeholder approach is required to ensure that solutions are comprehensive in addressing harmful online content. Significant weight should be given to civil society; individuals from vulnerable groups, such as minors, and people from non-technical backgrounds should be involved in this process and their perspectives taken into account. Finally, EuroDIG’s Workshop 1b supports the direction, great importance, and urgency of a uniform legal framework such as the EU’s CSAR.
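The privacy-minimising matching described in point 2 can be sketched as a digest lookup: the detector learns nothing about an image except whether its digest appears on a blocklist of known illegal material. This is a simplified illustration — SHA-256 is a stand-in, and the blocklist contents are hypothetical; deployed systems use perceptual hashes so that near-duplicates still match, plus additional cryptographic protections.

```python
import hashlib

def digest(image_bytes: bytes) -> str:
    """Exact-match digest; real systems use perceptual hashing instead."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical blocklist, distributed as digests only, never as images.
BLOCKLIST = {digest(b"known-illegal-sample")}

def matches_known_content(image_bytes: bytes) -> bool:
    """True only on a digest match; reveals nothing else about the content."""
    return digest(image_bytes) in BLOCKLIST

print(matches_known_content(b"known-illegal-sample"))  # True
print(matches_known_content(b"holiday photo"))         # False
```

The design choice the workshop highlights is that the check yields a single bit per image, which is what limits the impact on the confidentiality of communications.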

Workshop 2a | Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics

Rapporteur: Francesco Vecchi, Eumans

  1. Impact and Challenges of the EU Elections
    Disinformation campaigns before EU elections targeted issues like Ukraine, COVID-19, and the state of EU democracy, aiming to manipulate public opinion and polarize voters. While immediate election periods showed reduced incidents, AI and traditional methods play crucial roles in maintaining (or degrading) electoral integrity and ensuring (or threatening) access to verified political content. The measures put in place by the EU (through funding an independent organisation like EDMO, the Code of Practice on Disinformation, the EEAS, the European Parliament, and a network of fact-checkers) have succeeded in mitigating the impact of foreign interference. However, concerns remain about the spread of mistrust in democratic institutions.
  2. Possible Solutions
    To combat disinformation, a multimethod approach includes independent fact-checking, international collaboration on research and demonetisation strategies, and holding digital platforms accountable [1]. The representative of Meta’s Oversight Board presented recommendations addressed to the platform on how to operate during elections. Educating users in critical thinking and media literacy, along with developing voter-friendly communication, enhances electoral transparency and promotes informed electoral participation. Besides, the long-term financial sustainability of reliable media is key to managing effective strategies.
  3. Multidimensional Approach
    Addressing media manipulation and electoral integrity requires enhanced cooperation between states, platforms, and civil society with a multidimensional approach involving diverse stakeholders, multidisciplinary expertise (e.g. psychosociology, neurology, linguistics, communications, etc.), multi-level governance (from international to local), and the development of inclusive multilingual standards.

[1] See the full document here.

Workshop 2b | Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics

Rapporteur: Francesco Vecchi, Eumans

  1. General Mistrust in Democratic Institutions
    In 2024, amid widespread distrust in democratic institutions globally, approximately 4 billion people engage in elections. Information (both digital and traditional) is increasingly crafted for entertainment, gamification, and political polarisation, amplified by Artificial Intelligence through propaganda, translation services, and micro-targeting. More specifically, social media platforms serve as crucial feedback and control channels for governments, particularly in the Global South.
  2. Diversified and Tailored Solutions
    To tackle these challenges, promoting media literacy in educational curricula is essential, fostering critical thinking and fact-checking skills. Creating a symbiotic relationship between stakeholders (taking proactive measures to combat disinformation) and users (encouraged to adopt critical thinking practices and rely on verified sources) strengthens resilience against misinformation. Besides, tailored solutions are crucial: e.g. Central-Eastern Europe frames disinformation geopolitically, African countries grapple with centralised power dynamics, and India faces issues with social media micro-profiling. Finally, empowering community leaders strengthens local resilience by leveraging their influence to promote accurate information.
  3. Focus on Inclusivity and Social Media
    An inclusive global approach to infrastructure development avoids biases and ensures equitable solutions across regions. Prioritising efforts on social media platforms, especially in the Global South where youth and mobile access are influential, enhances interventions against disinformation and supports transparent electoral processes.

Workshop 3 | Network Evolution: Challenges and Solutions

Rapporteur: Francesco Vecchi, Eumans

  1. Discussion of the EU Commission Draft White Paper
    The current state of network infrastructure and computing continuum is inadequate to provide universal access to the Internet in the EU, with weaknesses in the cloud and AI sectors. Financial pressures (e.g. investment challenges and low conversion rates), worsened by the disaggregation of hardware and software, threaten long-term technological advancement and digitalisation.
  2. Key Concerns
    The Draft White Paper focuses on creating a single digital market with secure infrastructure and sustainable competition. However, significant market and financial pressures would result. There is therefore a need to increase scalability, and pressure to improve network capabilities to meet the minimum capabilities noted. This can only be pursued through significant investment in technology and improved efficiency via the implementation of new technologies such as optical fibre and smart networks.
  3. The Draft White Paper proposes an Integrated, Flexible, and Consumer-Centered Network Model
    Progressing the evolution of the network faces significant challenges, including the cost of replacing current infrastructure and of improving network capabilities to offer the speeds and access envisaged under the paper’s proposal. The imbalance created between incumbents and new network operators will also need to be addressed through regulatory guidance in order to mitigate financial and technical harms. The discussion needs to continue in order to establish threshold criteria and to investigate consumer needs and expectations, as no single solution will meet everyone’s expectations.

Workshop 4 | Challenges and Opportunities: Emerging Technologies and Sustainability Impacts

Rapporteur: Francesco Vecchi, Eumans

  1. Interconnection of the Twin Transitions
    The digital and environmental transitions are interconnected and, together, can achieve the goal of reducing emissions by 2050. EuroDIG supports the EU’s mission to balance sustainability with privacy, security, safety, pluralism, and freedom of expression, recognising that these elements must coexist harmoniously in the pursuit of a greener future.
  2. Sustainable Digital Solutions
    Adopting a sustainability-by-design approach involves making technology inherently more sustainable while using digital solutions to promote sustainability. It is also essential to develop common indicators, guidelines, and standards, including the right to repair. In this sense, the focus of regulation should be shifted towards the Internet itself, rather than solely on infrastructure or products, by providing greener websites, protocols, and standards; and governance practices that prioritise environmental concerns. Emphasis should be given to sustainability practices for smartphones, including operating system diversity, genuine social media engagement, and Free and Open Source Software (FOSS).
  3. Human-centric Multistakeholder Approach
    In crafting recommendations and guidelines, a human-centric multistakeholder approach ensures that diverse perspectives are considered and that technological advancements serve the needs and values of individuals, fostering an inclusive and sustainable digital environment.

 

YOUthDIG Messages

Policy Propositions on Artificial Intelligence and Human Rights

Discrimination in AI
Artificial Intelligence (AI) has the potential to reinforce and create new forms of discrimination. This stems from the inherent biases in data, which is far from neutral and often reflects existing societal prejudices and biases. To address these issues, we propose transparency as a key action. Implementing synthetic data and involving focus groups, particularly those representing minority and intersectional backgrounds, can ensure a more balanced and inclusive dataset, making AI systems more sensitive to diverse perspectives.

Moreover, there is a pressing need for legal clarification regarding responsibility for discrimination in AI. Clear guidelines and accountability measures must be established to effectively address and prevent bias.

Bias of Policy Makers Due to Techno-Solutionism
Policy makers sometimes fall into the trap of techno-solutionism, relying heavily on technological fixes without considering the broader social context. To counter this bias, an interdisciplinary approach is essential. Involving experts from various fields – technology, sociology, ethics, and law – and representatives with diverse professional backgrounds can lead to more sustainable and holistic solutions.

A multistakeholder approach is also fundamental. Collaboration across different sectors and disciplines will ensure that diverse viewpoints are considered, which leads to more applicable and comprehensive policies. To facilitate this, we propose the creation of a dedicated body focused on intersectional and interdisciplinary collaboration. This body would meet regularly to assess ongoing issues and work towards continuous improvement through cooperative efforts.

AI in Border Control
The use of AI in border control raises significant human rights concerns, particularly for refugees and individuals crossing borders who are inherently vulnerable. The collection of biometric data often occurs without proper consent, exacerbating these concerns.

Furthermore, AI is not a solution to the migration crisis. In fact, the inherent biases and risks associated with AI could worsen discrimination and lead to unjust outcomes. Therefore, the use of AI in border control should be prohibited, prioritising human rights and ethical considerations above technological solutions. These policy propositions aim to address critical issues at the intersection of AI and human rights. By promoting transparency, accountability, and interdisciplinary collaboration, we can ensure the ethical and fair use of AI technologies.

Education
Our goal is to empower individuals through education, to enable them to assert their AI and digital rights and critically analyse technological solutions. We encourage the implementation of constructive and informative campaigns to raise awareness of AI impacts. Additionally, we advocate for the integration of AI literacy into school curricula.

Data for AI training
AI programs are “trained” by being exposed to large quantities of existing works, photos, information, and data. Public awareness of how our data are used, especially in training new AI programs, is very low. Most people are unaware that their personal photographs are being used for AI training. Even when notifications are provided, they are often buried in general Terms and Conditions or presented in a way that users do not fully understand. Currently, users can opt out, but as this issue grows, we propose that users must explicitly opt in, following the GDPR example. Obtaining explicit consent for using personal content should become paramount.

Given the vast scope of the data in question and the economic interests of businesses, this issue should be standardized at the international or EU level. Precise recommendations and obligations should be imposed on companies, non-governmental organizations, governmental institutions, and all stakeholders. This regulation should ensure that consent is obtained from individuals whose biometric data are being used to train models that generate new content.

Technosolutionism
Using AI to solve problems may seem progressive, glamorous, and investment-worthy. However, AI might not be the most efficient way to solve a problem, for example, in public services from waste to migration management. In fact, it may even create new issues, as is expected with online child safety measures, such as biometric age verification or client-side scanning.

We urge integrating a comprehensive, multistakeholder impact assessment and analysis of both actual and potential checks and balances before implementing AI as a problem-solving tool, be it in digital policy or for practical issues. We urge policymakers to carefully consider this impact assessment, provide justification for their decisions, and be held accountable based on the assessment and associated risks and costs.

Deepfakes
Deepfake videos are increasingly common in the media, especially during crises and elections. This misinformation prevents rational decision-making, increases suspicion in institutions, and harms democracy. To combat this, Europe needs a legal framework for deepfake usage, funding for detection technologies, and mandatory labeling for all deepfakes to ensure transparency.

We propose a system to confirm the authenticity of information, such as a “badge of authenticity” using a QR code or blue tick circle. Media houses could use this system to verify content. Additionally, educating citizens on recognizing misinformation and working with technology companies would strengthen this effort. These measures will help protect society from the damaging effects of fake news and deepfakes, ensuring a more informed and democratic populace.

An intersectional approach for youth participation for a better Internet governance future

Representation:

  • The creation of spaces with co-management structures for youth to provide policy recommendations while monitoring the implementation of these actions.
  • Promoting a requirement for young people to have a ‘seat at the table’ by developing standards and recommendations to encourage the fostering of meaningful participation and inclusion in high level discussions.
  • Increasing the stream of additional stable funding and income for youth organizations (including youth councils), such as operational grants, while assessing and monitoring their successful implementation.

Education:

  • Strengthen both non-formal and formal education via collaborative efforts among relevant stakeholders – young people, decision makers, experts, academia and many more – by working towards an understanding of how and why education needs to be tailored to specific groups, through the following topics: youth participation, digital literacy (including how bias is present in the online sphere), inclusivity and accessibility, and, finally, critical thinking.
  • Recognise the current work of non-formal education agents on these topics for the creation of an organized European curriculum that can be implemented in formal education.
  • This educational program would follow the ‘youth for youth’ principle, where the young people would be, within the participatory model, included in all aspects of design and development processes, adopting an intersectional approach.

Inclusivity:

  • Inclusivity starts with language, and we should ensure that all policies and regulations are transparent and comprehensible to everyone, especially the youth, limiting the overuse of technical jargon. Policy briefs and documents need a youth-friendly version that explains things transparently, including how these policies affect everyday life.
  • An intercultural aspect is also crucial for an inclusive environment, therefore LLMs should be trained in different (European) languages to reduce inequalities in access to information and knowledge. To ensure accuracy, this needs to have some level of human verification wherever possible. This could be achieved, for example, in partnership with local universities.
  • Apply standards that make online spheres more accessible, demanding all websites to have accessible features such as services for people with disabilities (visual and hearing impairments) to ensure that access to all content is equal.

For the successful development of the points above, it is crucial to have secured stable funding and adopt an intersectional approach to ensure no one is left behind.

Fair and Privacy Preserving Use of Data

Data Privacy shouldn’t be subject to one’s personal, social and economic status.

  • Meaning that: Information about data collection and data use must be presented to users transparently, in simple language and easy to understand.
  • A crackdown on dark patterns – outlawing dark patterns and empowering consumer agencies to identify them.
  • Creating a standard of having to opt in to data sharing, without penalty or exclusion of users.

In the long term, we demand that the burden of the privacy experience be taken away from users and that power be shifted back to them, allowing users to decide what information is shared about them.

New Economics of Data
We acknowledge that data is an asset, the product of users’ labour, and that it is used as a commodity to facilitate price discrimination. Therefore, we advocate for preventing companies from increasing the prices of products and services based on users’ personal data shared without their consent.

Age verification
We are seriously concerned about children’s welfare, and we acknowledge that an effective technical solution to protect children online without infringing on privacy has yet to be discovered.

Entities who wish to improve children’s welfare online should not expand privacy-reducing, technocentric solutions, but should prioritize:

  • Strengthening law enforcement financially, educationally and structurally.
  • Shifting liability to providers of explicit and mature rated content.

Find the Messages from previous years in our archive.

More information on the wiki

Downloads 2024