Messages from Strasbourg

12 – 14 May 2025

 

Main Topics

Main Topic 1: Why the WSIS+20 Review Matters and How National and Regional IGFs Can Enhance Stakeholder Participation.

Rapporteur: Mark Carvell, Internet Governance Consultant; Vladislav Ivanets, University of Gothenburg

  1. EuroDIG stakeholders welcomed the update from Co-facilitator H.E. Ms. Suela Janina on the opportunities for meaningful stakeholder participation in the Review, which will ensure its legitimacy. EuroDIG stands ready to contribute as a channel for inputting key issues and for establishing the momentum necessary to achieve the right outcomes with a forward-looking agenda. EuroDIG recommends that all stakeholder inputs and proposals be included in the UN’s official record.
  2. The Internet Governance Forum (IGF) is a fundamental platform for multi-stakeholder cooperation and should be granted a renewed permanent mandate. The IGF needs to implement the substantive improvements expressed during the open consultations, including enhancing its inclusivity and ensuring that its outcomes lead to action.
  3. The WSIS+20 Review process should be conducted in a transparent, inclusive, and diverse manner. A multi-stakeholder approach is key to achieving this. The upcoming action plan should align with the SDGs and the priorities for action set out in the Global Digital Compact (GDC).
  4. The Review should address new and persisting socio-technological and economic challenges, such as the north-south digital divide and gender inequalities, human-centric AI regulation, climate change, human rights online, and maximising diverse stakeholder participation in the global digital economy from all regions.
  5. National and Regional IGFs (NRIs) are important and powerful engines for the bottom-up and multi-stakeholder approach and therefore they should be actively included in the Review processes. National and regional initiatives are fundamental in providing national perspectives and mechanisms for policy adoption.

Main Topic 2: Neurotechnology and Privacy: Navigating Human Rights and Regulatory Challenges in the Age of Neural Data

Rapporteur: Minda Moreira, Internet Rights and Principles Coalition (IRPC)

  1. Definition
    It is difficult to conceptualise neural data and the debate on the definition is still far from reaching consensus as everything could potentially be called “neural data”. An interdisciplinary approach is crucial to better address the concept of neural data, as well as its implications and challenges.
  2. Impacts
    Neural technology has great potential but comes with great risks, particularly when combined with AI. As neural data starts to be applied not only for medical but also for commercial purposes, the mind – the last frontier of human privacy – is at risk of being manipulated and influenced, affecting not only the right to privacy but also other rights, such as the rights to freedom of thought and expression.
    Whether a new human right – “neuroright” – is needed or whether neural data protection is already included in existing provisions is a matter that needs to be fully addressed by legislators and policymakers.
  3. Regulation
    Existing regulatory frameworks, such as Council of Europe Convention 108+ or the General Data Protection Regulation (GDPR), already include provisions that could apply to neural data, particularly the right to privacy, but they have yet to address it adequately. It is crucial to identify potential gaps and enhance legal safeguards so that the unique sensitivities of neural or mental data are addressed. It is important that human beings, rather than the technology itself, are the focus of regulation. Global regulatory frameworks and cooperation, as well as technology governance that is both rights- and value-based, are needed to address cross-border challenges.

Main Topic 3: Europe at the Crossroads: Digital and Cyber Strategy 2030

Rapporteur: Karen Mulberry, IEEE SA and Oksana Prykhodko, iNGO European Media Platform

  1. A strategy for Internet resiliency has to include measures for economic resiliency to ensure adequate investment in development, and a new geometry of interconnections driven by current demand while supporting resilience through redundant capacity. Other relevant topics include ensuring sufficient energy and water supply, and initiatives to make human capital more available by closing the skills gap for professions such as AI cybersecurity specialists.
  2. The European regulatory field needs smarter, non-prescriptive, risk-based frameworks aligned with international security and information risk-management standards. A multi-stakeholder approach and public-private cooperation are key to achieving harmonisation and standardisation, and a balance between new technology and human rights. Disproportionate measures, particularly for smaller and mid-sized organisations, may increase market concentration and undermine Internet resilience. Europe’s future should lie in advancing a shared vision of the common interest.
  3. Elements of a successful strategy include: supporting best practices such as IPv6 for sustainable growth and RPKI for routing security; recognising the technical expertise of global Internet coordination bodies in their roles ensuring global interoperability; reliable encryption to avoid political, economic, malicious and military risks; market decentralisation, which can help even under military aggression; and a focus on innovation.

Main Topic 4: Transatlantic Rift on Freedom of Expression

Rapporteur: Yrjö Länsipuro, ISOC Finland

  1. Tensions between tech giants and EU and other European regulation, nothing new in themselves, are now becoming increasingly entangled with transatlantic political conflicts, with Internet issues at risk of becoming pawns in disputes over trade or defence policies. This has exacerbated, at least on a rhetorical level, the discrepancy between European and U.S. interpretations of freedom of expression. European-style regulation – against harmful content or election interference – has been labelled outright censorship.
  2. As to how Europe should respond to the pressure, there was a consensus that retreating was not an option. While continuing the transatlantic dialogue and trying to correct obvious American misunderstandings about the nature of the DSA, DMA and other regulations, Europe should make clear that it will defend its basic principles. At the same time, the European regulatory instruments should continue to be refined, simplified and made smarter.

Main Topic 5: The Age Verification Dilemma: Balancing Child Protection and Digital Access Rights

Rapporteur: Desara Dushi, Future of Privacy Forum

  1. It is important to find the right way to address child protection with age verification tools, as these tools affect everybody, not just children. They create risks to privacy, security and digital services, and affect the globally interoperable Internet, undermining trust in it. To safeguard against these risks we need to ensure that age verification tools comply with privacy and security requirements. No technology is foolproof. The negotiations have to move to a global level, involving all stakeholders. We must develop context-aware, rights-based approaches.
  2. Technology can deliver. To achieve the best interests of the child, we need to make the Internet age-aware, not identity-aware. Child-friendly platforms can only work if the Internet knows at least the user’s age range. However, we need to ensure that the measures taken do not leave anyone out, increasing the digital divide.
  3. The digital environment wasn’t created for children, even though many of them are now online. That’s why we need to bring children’s perspectives into the debate. Age assurance is not a one-step or single solution. While taking this measure, we need to make sure children are not excluded from the Internet, by taking proportionate steps, including offering more than one option. When designing an environment for children, we need to design it with them.

 

Workshops

Workshop 1 | AI & Non-Discrimination in Digital Spaces: from Prevention to Redress

Rapporteur: Minda Moreira, Internet Rights and Principles Coalition

  1. Prevention
    The session agreed that more needs to be done to address group-based discrimination, and that inequality may not be solved with AI alone. It may be necessary to assess whether AI is really needed or whether non-technical solutions may be more effective. Where AI is needed, transparency and accountability are crucial. Bias detection with mandatory impact assessments must be used, along with involving and consulting impacted communities in AI design and development processes, combined with stronger powers for equality bodies and industry best practices.
  2. Mitigation
    Algorithmic discrimination is difficult to detect and to prove, and those affected find it difficult to access justice. Intersectional discrimination is even more difficult to address, not only because of the resistance of states and international courts to recognise it, but also because of the challenge of working effectively with the affected communities. There are major barriers to effective AI regulation for tackling discrimination and bias; session participants agreed that these include the lack of transparency and accountability, limited access to data and training sets, as well as commercial secrecy and funding constraints.
  3. Redress
    Lack of adequate funding, particularly for equality bodies, is a major barrier to accessing justice.
    Some steps are being taken by advocacy groups to collaborate with regulatory bodies, but a multistakeholder approach at a global level, involving civil society, the private sector, equality bodies and affected communities, is vital for meaningful cooperation and for fully addressing discrimination in all its forms, particularly intersectional discrimination.

Workshop 2 | The Interplay Between Digital Sovereignty and Development

Rapporteur: Karen Mulberry, IEEE SA and Yannic Plumpe, TUM Think Tank

  1. Defining and achieving digital sovereignty in Europe will require a multifaceted approach, balancing autonomy with global cooperation, and addressing both technological and societal challenges.
  2. Creating effective development strategies for digital sovereignty must be done through partnerships and collaboration that balances the needs and creates opportunities for all European stakeholders.
  3. For Europe to become a stronger player and less dependent on outside innovation, competition policy needs to be more balanced and focused on creating growth opportunities and providing support for companies, especially encouraging the development of technical advances within Europe.

Workshop 3 | Quantum Computing: Global Challenges and Security Opportunities

Rapporteur: Jörn Erbguth, University of Geneva

  1. Quantum computers at scale are not here yet. However, they are starting to be able to solve some problems faster than classical computers. Hybrid computers can be used to augment limited-scale quantum computers. We need to open quantum computing to everyone, not only to a few countries and corporations.
  2. Quantum computers will be able to break current cryptography. Quantum is unavoidable. We should not try to prevent it but to embrace it. We need to raise awareness.
  3. The current risk is ‘harvest now, decrypt later’. We need to begin the migration to quantum-proof encryption now, because it will take years. Countries need to take an inventory of the cryptography in use, including on all devices, for example IoT devices. The migration to quantum-proof encryption risks creating a new digital divide.

Workshop 4 (NRI-Assembly) | How Can the National and Regional IGFs Contribute to the Implementation of the UN Global Digital Compact?

Rapporteur: Mark Carvell, Internet Governance Consultant; Vladislav Ivanets, University of Gothenburg

  1. NRIs should act as key enablers and catalysts for change in local digital spaces by adapting global principles and encouraging open discussion among local stakeholders. This can be achieved by channeling grassroots perspectives and innovations into global policies, while contextualising the global commitments into unique cultural contexts, economic conditions, and regulatory environments.
  2. NRIs should act as feedback loops for local communities, providing the outcomes of their work in the form of publicly available reports. NRIs can drive capacity building and address national or regional digital issues through events such as workshops and targeted programmes, with special attention to vulnerable groups.
  3. NRIs should make use of the digital tools available, such as the ‘GDC implementation map’ when it is launched, to navigate the critical milestones and KPIs, collaborate and contribute at regional and global levels with the assistance of the IGF Secretariat. NRIs should aim to become visible by getting involved and by contributing to the existing roadmaps.

Workshop 5 | Bridging Digital Inequalities and Challenges in Multicultural Societies

Rapporteur: Jacques Beglinger, EuroDIG Board

  1. Infrastructural and social gaps lead to disparities in Internet access, especially in geographically isolated and peripheral areas, as highlighted for example in the ISOC-IL report, but also among different groups in society, including different generations.
  2. Diversity and unequal opportunities lead to differences in Internet usage patterns, affecting digital literacy skills and the capacity for effective online engagement in essential areas, such as retrieving information and using public and private digital services, from social media to online banking, as well as education in general. Factors such as age, gender and culture also lead to increased vulnerability of specific groups to online harms, such as phishing, misinformation, and cyberbullying.
  3. These divides require actionable strategies and policies for a more diverse, inclusive and equitable digital future. To this end, leadership, multi-stakeholder collaboration and focused, well-thought-through projects are essential, including engaging governments, civil society, academia, and tech companies.

Workshop 6 | Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies

Rapporteur: Jörn Erbguth, University of Geneva

  1. Within one year the use of AI in business has risen from 35% to 75%. Employees have mixed feelings about it, with a slight majority being positive and a strong minority being negative.
  2. There is a lack of upskilling, governance and ethical policies in place. The use of AI has to be transparent and accountable. It needs to be monitored and evaluated. Impact assessments are required when there is a possible risk to human rights.
  3. It is too early to see the impact of the EU AI Act, and even more so of the CoE Framework Convention on AI; legal certainty has yet to settle in. This needs to be evaluated in the future. The UN Guiding Principles on Business and Human Rights from 2011 provide important guidance as well. Human rights tracking tools can track the adherence to and implementation of human rights by nations and corporations.

Workshop 7 | Generative AI and Freedom of Expression: Mutual Reinforcement or Forced Exclusion?

Rapporteur: Desara Dushi, Future of Privacy Forum

  1. GenAI has the potential to diminish unique voices, including minority languages and cultures. It poses integrity issues and problems with identifying whether content is created by humans or technology. It also has the power of persuasion (including via the disinformation it enables) and influences market dynamics. It can also be used to facilitate gender-based violence.
  2. Journalism and genAI contradict each other: the former is about facts, the latter about calculated probabilities. There is a risk of standardised expressions as well. However, genAI also offers opportunities for journalism, helping to bring more (tailored) content to audiences. Questions remain around accuracy, impact on humans (creativity vs laziness), and funding/reliance. The risk of using AI in journalism is losing control over news production and quality, which might also impact the future of the business model. One of the main issues will be keeping journalism visible and keeping the connection with the audience.
  3. We should take AI seriously and be aware of what these systems can and cannot do; their rapid development and impact in the near future create a lot of uncertainty in terms of dynamics and effects on freedom of expression. There is a risk of omniscience. AI, including genAI, has implications not only for freedom of expression but also for privacy (surveillance), which leads to the control of perception. We need to act at a networked and collective level.

Workshop 8 | How AI impacts Society and Security: Opportunities and Vulnerabilities

Rapporteur: Jörn Erbguth, University of Geneva

  1. Most politicians and citizens do not have a sufficient understanding of AI. There are also gaps between the skills taught by universities and those required by business and technology that need to be filled.
  2. Dual uses (good and bad) are prominent. We see threats and progress in equal measure.
    There is no real opposition between security and ethics in AI. We need both.
  3. Current governance models do not cope well with the opportunities and challenges of AI and need to evolve.

Workshop 9 | Between Green Ambitions and Geopolitical Realities: EU’s Critical Raw Materials Act

Rapporteur: Constance Weise, IEEE

  1. Although demand is high, critical mineral deposits are limited and geographically scattered. With most refining capacity concentrated in China, Europe faces acute supply-chain vulnerability. The EU’s Critical Raw Materials Act seeks to mitigate this risk through its 2030 targets – 10% domestic extraction, 40% EU processing, 25% recycling, and a cap of 65% on reliance on any single external supplier. This is demonstrated by Finland’s raw materials project, which combines local mining, European refining, green financing, and strict ESG standards into a replicable model for securing strategic raw material supply. While this is a good example to be replicated, the European Union depends on its member countries signing partnership agreements to ensure supply diversification.
  2. ‘Criticality’ means different things to different regions. Yet mostly, it is not critical to the countries that are most in need. With the attachment to energy and security come ethical concerns. For instance, in the DRC, cobalt mining is destabilising the region, including through environmental degradation and human insecurity. We need to create a sustainable world that takes the social element into account, whereby all countries abide by the same ethical due diligence. In order to achieve improved holistic processes with the community, we need to ensure that all people are engaged and consulted.
  3. The EU needs to have a strict monitoring process in place that encourages companies to implement the regulations. Technology development can occur best when environmental impacts are minimised and benefits for the environment and the community are maximised.

Workshop 11 | São Paulo Multistakeholder Guidelines – the Way Forward in Multistakeholder and Multilateral Digital Processes

Rapporteur: Bruna Martins dos Santos, WITNESS

  1. We need more comprehensive processes: identifying stakeholders, preparing draft outcomes in a transparent way, factoring in feedback from the wider community, and open decision-making are some of the steps that can readily be drawn from the São Paulo Multistakeholder Guidelines in order to foster trust in the various processes and ensure that outcomes reflect a true multistakeholder consensus, not just a procedural formality.
  2. The multistakeholder approach needs to be applied in a way that is meaningful and relevant to the specific context and issue under discussion. In this context, we need fewer limitations on the conversation and should foster more participation. Dialogue and an improved exchange of views between all stakeholders are more needed than ever. The São Paulo Multistakeholder Guidelines are a good path towards more inclusion and reaching beyond the usual suspects interested in Internet governance.
  3. A critical approach to consensus is needed. In a context where power is unequal, consensus building could lead to minority voices being either silenced or treated as a source of disruption. It is crucial for truly multi-stakeholder, inclusive approaches to digital-related issues and processes to identify, acknowledge and document differences.

YOUthDIG Messages

Offline Solutions to Digital Problems

Digital Literacy

  • Problem: Children are becoming increasingly susceptible to the dangers of the digital world, including harmful content, manipulative narratives and echo chambers. We believe that digital literacy is essential to equipping young people with the skills needed to critically engage with online information, protect themselves from these threats and gain positive digital experiences. Many people do not realise how algorithmic manipulation (particularly on social media) can help create breeding grounds for polarization.
  • Solution: Education in digital literacy should provide individuals with vital skills for their digital future. This begins with comprehensive education in schools, which should empower students to identify disinformation, understand how digital platforms shape perceptions, and develop resilience against manipulation, with teachers provided targeted training and resources to guide these discussions. This is where the private sector can contribute meaningfully, by developing and supporting training initiatives and community resources.

Other Stakeholders and Considerations

  • Parents also play a crucial role in fostering informed and safe digital practices at home. As the primary influence in a child’s early development, they must not only monitor screen time, but foster open dialogue about online experiences and encourage offline socialization.
  • Public information and awareness campaigns can further promote vigilance across all age groups. Above all, digital literacy must remain politically neutral and help rebuild public trust. At the moment, widespread skepticism, fueled by perceptions that scientists are biased or that politicians act out of self-interest, undermines the credibility of accurate information; this lack of trust impedes the uptake of digital literacy education.
  • As online spaces for youth become increasingly controlled and restricted, offline solutions offer a crucial alternative by creating physical environments where young people can freely interact, organize, and express themselves. These offline spaces provide safe havens for youth to develop the skills and confidence needed to navigate digital spaces responsibly and advocate for their rights online. By fostering a strong offline community, we empower youth to reclaim agency in both the digital and physical worlds, ensuring their voices remain heard.

Content Moderation from Two Perspectives

Algorithms and Content Moderation

  • Problem now: One of the key factors making young people so addicted to digital spaces and online communities is the strength and specificity of recommendation algorithms, which can create polarisation.
  • To address this issue, users must actively engage with algorithms that were once hidden and secretive but have now become central to how we experience content. Instead of passively scrolling, users should be empowered to give feedback on what they want to see more or less of, taking control over the content they consume. Platforms should not only explain why specific content is being shown but also offer alternatives – different accounts, channels, or media – so users can actively seek out diverse perspectives and reshape their digital experience.

Free Speech vs Content Moderation

  • Countries have a right to enforce their laws to protect their national sovereignty. Private companies controlling content online can undermine state authority and democratic oversight; at the same time, giving governments unchecked power over the internet risks censorship, which can lead to authoritarian control. The ultimate power over digital spaces should rest with the people – through transparent, democratic governance that protects both freedom and accountability.
  • We should have ways to enforce content moderation that align with the right to free speech. To ensure our digital ecosystem does not amplify any political bias in content moderation practices such as flagging and shadowbanning, individuals should be able to be exposed to all sides and all information. Individuals should have personal control over the information they consume online.
  • We emphasize the importance of promoting user-control tools on social media and platforms, such as allowing users to manage their own feeds and engage with democratic content moderation tools like flagging and community notes. This can lead to positive political involvement and enhanced awareness, ensuring users take an active role in shaping their online experience. This requires raising awareness of the tools already in place as well as expanding their application.

Everything AI

Increased Energy Demand and AI

  • Increase collaboration for energy- and cost-efficient computing: As the demand for computational power grows, it’s essential to develop greener, more energy-efficient GPU systems. Governments, companies, youth, academia, tech communities, and civil society must work together to advance sustainable infrastructure for future computing needs.
  • Make data centers smarter and more sustainable: Promote the creation of smart cities that can utilize the waste heat produced by data centers, and move data centers to operate with environmental feasibility as a key standard.
  • Ensure transparent energy usage in AI: Introduce accessible and comprehensible information about energy consumption and associated costs when using AI tools, both for end-users and developers. This transparency will foster more conscious digital behaviour.

Making AI Practical, Understandable, and Accountable

  • Indicate when AI is used: Digital services and platforms should inform users when an AI is operating, using visible markers or labels.
  • Develop ethical guidelines on AI application: Establish clear ethical frameworks regarding how and where AI tools may be used, shaped with input from young people who will live with the consequences of these technologies.
  • Educate for AI literacy and critical thinking: Embed AI education in school curricula and lifelong learning initiatives. Training programs should emphasize how AI functions, its limitations, potential errors, and the continuing responsibility of humans to critically assess and oversee AI systems.
  • Assess risks before deployment: Ensure that AI systems undergo impact and risk assessments before implementation. Define accountability measures and clarify legal liability in the case of AI-related errors.
  • Automate responsibly: Digitalize and automate more administrative tasks through governed AI tools to increase efficiency, while maintaining transparency and oversight.

Innovation and Inclusive AI Development

  • Create frameworks for ethical AI innovation: Establish dedicated legal and policy frameworks that facilitate innovation in AI while minimizing economic burdens. These frameworks should ensure inclusivity and sustainability in access to AI development tools.
  • Support the next generation of AI pioneers: Provide more funding, mentorship, and sandbox environments for students, researchers, and entrepreneurs to explore and experiment with AI technologies responsibly.
  • Strengthen the startup ecosystem: Increase financial support and visibility for startups innovating in the AI field, especially those with social impact and sustainability goals.
  • Empower youth to participate in shaping the ‘AI Continent’ – a digitally sovereign future where young people are actively engaged in developing, governing, and ensuring responsible access to AI tools across all sectors. Their voices and leadership must drive ethical, environmental, and economic integration, making youth central to the AI vision and innovation.

Human Rights and Data Security

  • Software development companies, as key stakeholders in digital governance, must be held accountable for ensuring data protection. This responsibility involves deploying robust technical mechanisms, such as developing and maintaining secure IT infrastructure, and ensuring compliance with data privacy legislation. By doing so, they safeguard critical infrastructure and sensitive data, thereby fulfilling their duty to protect public trust and uphold security.
  • Governments are responsible for regulating the use of automated decision-making systems and the handling of sensitive data, with a particular focus on ensuring transparency and protecting the rights of vulnerable groups – such as migrants, asylum seekers, and members of the LGBTQ+ community – who risk being disproportionately affected by the unbalanced power held by digital platforms.
  • The use of surveillance technologies, including spyware, poses a serious threat to the right to privacy, personal security, and freedom of expression. Journalists, human rights defenders, and activists are particularly vulnerable to these intrusions, which can lead to self-censorship and suppression of dissent. Governments must be held accountable for the deployment of such tools, especially when used to silence opposition or restrict civic space. Without transparency and oversight, the misuse of surveillance erodes democratic institutions and fundamental rights. The internet should not be exploited as a tool of control by those in positions of power to manipulate public opinion or suppress dissent.
  • Greater resources should be allocated by governments to judicial and executive authorities to strengthen the fight against cybercrime. These resources should include advanced technical tools, improved cybersecurity infrastructure, and ongoing training for judges, prosecutors, and law enforcement personnel. Legal consequences must be effectively enforced, especially in cases involving data breaches and unauthorized disclosure of sensitive information. EU-level support and cross-border cooperation are also essential to address the global nature of cyber threats effectively.

Digital Ecosystems / Regulation

We call for the creation of a strategic innovation ecosystem that places inclusion and collaboration at the heart of European digital development.

1) Smart Regulations

Smart regulation forms the foundation of our vision. We advocate for updated, simplified, and harmonized future regulations that support rather than hinder innovation. Technical experts must be systematically integrated into decision-making processes when creating regulations. Furthermore, emerging innovators should have access to free legal advisory services to navigate the complex regulatory landscape.

2) Strategic Innovation Ecosystem

The centrepiece of our proposal is a European Innovation Hub that serves as a catalyst for development and innovation in the digital field.

  • European Innovation Hub: This hub would incorporate a secure sandbox for testing innovations, interconnected talent centers across Europe, and a shared pool of essential resources including funding, knowledge databases, legal counsel, and technological solutions. The existing European innovation facilities (e.g. AI factory) will be involved in the European Innovation Hub. Universal digital accessibility must be guaranteed, while youth participation should be actively fostered through scholarships and study visits.
  • Governance: We call for pan-European representation with flexible participation models that accommodate the different financial capabilities of the participating countries. The Council of Europe should oversee the initiative to ensure adherence to human rights and rule of law principles. A rotating presidency system would ensure equitable leadership, while a multi-stakeholder governance body modelled on EuroDIG would guarantee diverse perspectives. To promote global inclusion, the physical infrastructure should be located in a developing country in Europe.
