Messages from Vilnius
17 – 19 June 2024
Download your digital copy of the “Messages from Vilnius”!
EuroDIG looks forward to the opportunities to amplify the purpose and objectives of the Global Digital Compact (GDC) following the Summit of the Future. EuroDIG offers to play its part in contributing to and monitoring progress in the implementation of GDC commitments. EuroDIG strongly supports the commitment in the Rev.1 draft text of the GDC to strengthen the multistakeholder model of governance and advises against any watering down of this commitment in the finalisation of the text. EuroDIG believes that the GDC process should build on the strong foundations and accomplishments of the WSIS instead of creating new mechanisms, and supports enhancing the role of the UN IGF. EuroDIG recommends simplifying the follow-up and review of the implementation of GDC commitments, with a substantive role for the Internet Governance Forum and the WSIS Forum so that non-governmental stakeholders can fully contribute to the process.
EuroDIG recognises the positive linkages between the WSIS+20 Review, the Global Digital Compact and the 2030 Agenda for Sustainable Development.
EuroDIG believes there needs to be a comprehensive understanding of the process of the WSIS+20 review.
The review should not undermine the achievements of both the 2003 and the 2005 phases of the WSIS. It should instead be primarily an opportunity to enhance the processes of Internet governance, global digital policy and cooperation, by using the São Paulo Guidelines as an inspiration for making processes more inclusive, transparent and accountable.
Subtopic 1 | Human Rights in the Digital Era, Europe’s Role in Safeguarding Human Rights Online
Subtopic 2 | One for All, All for One: The Role of Cooperation in Enhancing Cyber Resilience in Europe
Subtopic 3 | You on Signal and Me on Telegram – Messenger Interoperability by EU Regulation
There is an urgent need to ensure effective implementation of human rights frameworks online and to address the lack of awareness of these frameworks. A concerted multistakeholder effort is crucial to a) help develop a clear and robust legal framework, b) ensure a proactive role of civil society in influencing discourse, shaping and monitoring implementation, and c) emphasize the social responsibility of the private sector and its duty to respect the rights of users.
Concerted and coordinated efforts must be built on trust, cross-sectoral collaboration and international cooperation, which are vital to addressing cybersecurity challenges. These should include mechanisms for cyber cooperation in critical situations such as wartime. This requires training and education, as well as inclusive cybersecurity measures that cater to all segments of society.
Interoperability relies on technical (protocol) interoperability, which is being addressed through standardisation in the Internet Engineering Task Force (IETF), as well as on the operational and economic will to connect and exchange. It is vital to develop and refine mechanisms of market evaluation, enhance user choice, and maintain end-to-end encryption and privacy across different platforms. The extraterritorial implications of the Digital Markets Act (DMA) and the potential impact on users who rely on non-EU messaging services must be addressed.
Subtopic 1 | GovTech Dynamics: Navigating Innovation and Challenges in Public Services
Digital transformation in Public Administrations (PAs) requires stronger digital skills, which may call for mandatory training for civil servants. Challenges include digital skill gaps, limited data analysis capabilities, and regulatory barriers, requiring a shift towards enabling innovation. Cities and other PAs can innovate in collaboration with academia and the private sector through projects such as the GovTech Lab. These labs test new policies and technologies, fostering innovation through skill development and co-creation. Design thinking and user experience should prioritize simplicity and functionality. Cities can use open data dashboards to be more transparent to citizens by allowing them to easily visualize data about their living environment. Future challenges include digital identification, AI regulation, and ensuring technology accessibility for all, including senior citizens. Practical strategies and public co-creation are necessary for meaningful change.
Subtopic 2 | European approach on data governance
The new EU legislation on data is creating new scenarios. Nevertheless, the EU GDPR, the CoE Convention 108+, and the “privacy at all costs” approach remain central, as the Data Governance Act recognizes that privacy legislation prevails. Tension exists between the need to explore data and to open it up for PAs to be transparent, and the need to protect citizens’ right to privacy. The new EU legislation (Data Governance Act and Data Act) tries to strike a balance between the two. The European values enshrined in the GDPR are being adopted elsewhere, both because of EU influence and because of recognition of their validity. Furthermore, the CoE Convention 108+ is open for signature by non-member states too.
Subtopic 3 | Empowering communities: partnerships for access to services
Digitalization has become more and more relevant since Covid and the rise of climate change-related catastrophes. Digital instruments allow rescuers and PAs to quickly identify who is in need and where in a specific territory. Nonetheless, catastrophes also make digital infrastructure vulnerable, as disruptions in communication can be caused by unusual weather events. Large parts of the population still have no or little access to the Internet, which is particularly true for people living in low-income and/or remote areas. This yields discrimination in access to services and opportunities. However, this digital divide can be bridged with relatively cheap connectivity infrastructure: examples exist of public-private partnerships that reduce the cost of bringing connectivity to rural areas. On top of improved connectivity, services should be easily accessible through straightforward interfaces that require little expertise (accessibility by design).
Subtopic 1 | Innovation and ethical implications
The proliferation of AI-related initiatives and documents and the adoption of regulatory and human rights frameworks are key to fostering users’ trust in AI technologies, tackling AI’s complexity and applications, and providing tailored solutions to the specific needs of diverse stakeholders. A multistakeholder approach to AI governance is crucial to ensure that AI development and use are informed by a variety of perspectives, to minimise bias and serve the interests of society. A pressing ethical concern is the military use of AI, which is yet to be addressed by existing regulatory frameworks and will need more focused attention in the near future.
Subtopic 2 | The Framework Convention on AI and human rights, democracy and the rule of law
The CoE Framework Convention on AI and human rights, democracy and the rule of law is an important step towards a global approach to AI regulation. The CoE Framework Convention and the EU AI Act complement each other. Further steps should follow, taking into account the need to address the growing issues of AI from a global, rather than a regional, perspective.
Subtopic 3 | Identification of AI generated content
Current AI detection systems are unreliable or even arbitrary. They should not be used other than in an experimental context, and then only with a very high level of caution, and particularly not for assessing students’ work. Without reliable AI detectors, we have to rely on education and on critical assessment of content that takes into account that any content can easily be generated by AI. Watermarking and certification of origin would be a more reliable means of authenticating content and should be supported by regulation.
Rapporteur: Francesco Vecchi, Eumans
Workshop 1a discussed three recent measures on the protection of children from online Child Sexual Abuse (CSA): the proposed EU CSA Regulation (CSAR), the new UK Online Safety Act, and the positive results from the Lithuanian Law on the Protection of Minors against detrimental effects of public information. An agreement was found on the need for better regulation in this field, emphasising the accountability of online service providers for monitoring illegal and harmful material and safeguarding minors.
CSA is currently increasing exponentially and has serious consequences for the rights and development of children. For this reason, recognising such depictions and preventing child sexual abuse should go hand in hand. Participants are concerned about the safety of users, including with regard to the potential use of technology. Breaches of confidential communication or anonymity are viewed critically. At the same time, advantages of the regulations are recognised, e.g. with regard to problem awareness or safety-by-design approaches. Age verification procedures are perceived as both a risk and an advantage; however, they should not come at the expense of anonymity and participation.
The participants of Workshop 1a of EuroDIG believe privacy and safety are intertwined and inseparable, advocating that legal solutions to combat child sexual abuse online must strive to optimise both. These measures should be centred on children’s rights and their best interests, as a way forward to achieve this balance.
Discrimination in AI
Moreover, there is a pressing need for legal clarification regarding responsibility for discrimination in AI. Clear guidelines and accountability measures must be established to effectively address and prevent bias.

Bias of Policy Makers Due to Techno-Solutionism
A multistakeholder approach is also fundamental. Collaboration across different sectors and disciplines will ensure that diverse viewpoints are considered, which leads to more applicable and comprehensive policies. To facilitate this, we propose the creation of a dedicated body focused on intersectional and interdisciplinary collaboration. This body would meet regularly to assess ongoing issues and work towards continuous improvement through cooperative efforts.

AI in Border Control
Furthermore, AI is not a solution to the migration crisis. In fact, the inherent biases and risks associated with AI could worsen discrimination and lead to unjust outcomes. Therefore, the use of AI in border control should be prohibited, prioritising human rights and ethical considerations above technological solutions.

These policy propositions aim to address critical issues at the intersection of AI and human rights. By promoting transparency, accountability, and interdisciplinary collaboration, we can ensure the ethical and fair use of AI technologies.

Education

Data for AI training
Given the vast scope of the data in question and the economic interests of businesses, this issue should be standardized at the international or EU level. Precise recommendations and obligations should be imposed on companies, non-governmental organizations, governmental institutions, and all stakeholders. This regulation should ensure that consent is obtained from individuals whose biometric data are being used to train models that generate new content.
Technosolutionism
We urge integrating a comprehensive, multistakeholder impact assessment and analysis of both actual and potential checks and balances before implementing AI as a problem-solving tool, be it in digital policy or for practical issues. We urge policymakers to carefully consider this impact assessment, provide justification for their decisions, and be held accountable based on the assessment and associated risks and costs.

Deepfakes
We propose a system to confirm the authenticity of information, such as a “badge of authenticity” using a QR code or blue tick circle. Media houses could use this system to verify content. Additionally, educating citizens on recognizing misinformation and working with technology companies would strengthen this effort. These measures will help protect society from the damaging effects of fake news and deepfakes, ensuring a more informed and democratic populace.
Artificial Intelligence (AI) has the potential to reinforce and create new forms of discrimination. This stems from the inherent biases in data, which are far from neutral and often reflect existing societal prejudices. To face these issues, we propose transparency as a key action. Implementing synthetic data and involving focus groups, particularly those representing minority and intersectional backgrounds, can ensure a more balanced and inclusive dataset, making AI systems more sensitive to diverse perspectives.
Policy makers sometimes fall into the trap of techno-solutionism, relying heavily on technological fixes without considering the broader social context. To counter this bias, an interdisciplinary approach is essential. By involving experts from various fields (technology, sociology, ethics, and law), more sustainable and holistic solutions can be achieved that draw on the professional backgrounds of multiple representatives.
The use of AI in border control raises significant human rights concerns, particularly for refugees and individuals crossing borders who are inherently vulnerable. The collection of biometric data often occurs without proper consent, exacerbating these concerns.
Our goal is to empower individuals through education, to enable them to assert their AI and digital rights and critically analyse technological solutions. We encourage the implementation of constructive and informative campaigns to raise awareness of AI impacts. Additionally, we advocate for the integration of AI literacy into school curriculums.
AI programs are “trained” by being exposed to large quantities of existing works, photos, information, and data. Public awareness of how our data are used, especially in training new AI programs, is very low. Most people are unaware that their personal photographs are being used for AI training. Even when notifications are provided, they are often buried in general Terms and Conditions or presented in a way that users do not fully understand. Currently, users can opt out, but as this issue grows, we propose that users must explicitly opt in, following the GDPR example. Obtaining explicit consent for using personal content should become paramount.
Using AI to solve problems may seem progressive, glamorous, and investment-worthy. However, AI might not be the most efficient way to solve a problem, for example, in public services from waste to migration management. In fact, it may even create new issues, as is expected with online child safety measures, such as biometric age verification or client-side scanning.
Deepfake videos are increasingly common in the media, especially during crises and elections. This misinformation prevents rational decision-making, increases suspicion in institutions, and harms democracy. To combat this, Europe needs a legal framework for deepfake usage, funding for detection technologies, and mandatory labeling for all deepfakes to ensure transparency.
Representation:
Education:
Inclusivity:
For the successful development of the points above, it is crucial to have secured stable funding and adopt an intersectional approach to ensure no one is left behind.
Data privacy should not depend on one’s personal, social or economic status.
In the long term, we demand that the burden of managing privacy be taken away from users and that power be shifted back to them, allowing users to decide what information is shared about themselves.
New Economics of Data
We acknowledge that data is an asset, the product of users’ labour, and that it is used as a commodity to facilitate price discrimination. Therefore, we advocate preventing companies from increasing the prices of products and services based on users’ personal data shared without their consent.
Age verification
We are seriously concerned about children’s welfare, and we acknowledge that an effective technical solution to protect children online without infringing on privacy has yet to be discovered.
Entities that wish to improve children’s welfare online should not expand privacy-reducing technocentric solutions, but should prioritize:
Find the Messages from previous years in our archive.