The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Good morning, everyone, both here in the room and those joining us online. It's a pleasure to welcome you all to this important session at the Internet Governance Forum 2025. This panel, titled "AI at a Crossroads," is a joint initiative between the Laboratory of Public Policy and Internet and the Sustainable AI Lab. We're truly honoured to host such a global conversation, and I want to begin by thanking our distinguished panelists for being here today.
I'm Alexandra Krastins Lopes. Today I represent a Brazilian law firm where I provide legal counsel on cybersecurity and government affairs.
Now I'd like to pass the floor to Jose Renato.
>> JOSE RENATO: Good afternoon. Thank you for being here, and thank you for introducing me. I'm a co-founder of the Laboratory of Public Policy and Internet, and I'm now also doing a Ph.D. at the University of Bonn. I'm very happy to be here.
Well, our discussion is exactly about this intersection between sovereignty and the need for secure technological development that is consistent with the very urgent need to tackle climate change and environmental collapse as a whole.
So we have been identifying how there is a discourse on not only AI but digital sovereignty as a whole across different governments. The European Union is one example, Brazil is another, and countries like China and the U.S. are using more or less the same idea. Basically, these discourses and policies have mostly been focused on the need to secure national security and also economic development.
At the same time, we have also been seeing a similar movement within social movements. Different initiatives among Indigenous peoples or among workers' movements are also talking about digital sovereignty, AI sovereignty, et cetera. And in the global south, both of these discourses, the state-led ones and the social movement ones, are related to a history of dependency, particularly on technology and infrastructure, that dates back to colonial times. What many have called digital colonialism has also influenced these discourses.
And we know, at the same time, that AI is connected with infrastructure. It's strongly dependent on minerals and water. So how do we advance the calls for further independence and control over these technologies and infrastructures while also avoiding expanding the impacts on the environment, which are mostly driving climate change? We're also interested in understanding the global south and AI sovereignty as a whole. And that's why we have participants from distinct backgrounds here: government officials, representatives of international organizations, academia, and Civil Society as well, including one social movement from Brazil which is taking the lead in claiming digital sovereignty over its activities.
So now back to talk about the policy questions we have for the panel. I'm looking forward to our discussion.
>> MODERATOR: Thank you.
What are the aspirations of governments and communities, including social movements and Indigenous communities, with regard to AI sovereignty, and how can they be addressed? And, finally, how can the multistakeholder model of Internet Governance be applied within the design of policies aimed at fostering sustainable AI sovereignty, so as to have the demands of social movements effectively taken into consideration?
So let's start with speeches from our dear panelists. I would like to check if Ana Valdivia is already with us. Okay. I'll introduce her. Ana Valdivia is a departmental research lecturer in artificial intelligence, government, and policy at the Oxford Internet Institute at the University of Oxford. She investigates how algorithmic systems are transforming political, social, and ecological territories. The floor is yours.
>> ANA VALDIVIA: Thank you very much. Thank you very much for organizing this panel. I'm very pleased to be here. I'm sorry I cannot be there. I'm attending the international conference where we have been discussing this. Right.
Something very relevant we have found out is that LLMs and generative AI are becoming bigger, and that doesn't mean they are becoming better.
LLMs that are bigger reproduce more stereotypes than smaller ones. And that comes with other side effects, like environmental impacts, which I have been analysing for years now. I've been analysing the supply chain of artificial intelligence. And something I realized is that while national states have these narratives about digital sovereignty, for instance, in the UK, the government wants to develop more data centres to become digitally sovereign, there is another part of this debate: this infrastructure cannot really be sovereign, because it still needs different minerals and other natural resources that are not found within our states. So, let's say, if the UK wants to become digitally sovereign, it depends on other countries like Brazil, like Pakistan, like China, like Taiwan to develop all these infrastructures.
For instance, to develop the AI chips, which are GPUs, you need cobalt, you need copper, you need aluminum. And these minerals are extracted from other geographies. And the extraction of these minerals can have impacts on communities living nearby, as we have seen in the past in those geographies.
But then the increasing size of AI algorithms like GPTs and other LLMs has other side effects, as I have said. Now it's not only about mineral extraction; it's also about the processing, the training of these algorithms. And this comes with environmental impacts like water stress, and I have seen that in Mexico, for example. In Mexico, there is a state that is inviting a lot of data centres and a lot of big tech companies to deploy their infrastructure there. I can see the positive side of this, which is that the infrastructure of AI is going to be democratized because it's going to be present in different states. But that comes with other problems: the government is inviting the infrastructure without asking the communities democratically whether they want these infrastructures there. Because, as we know, data centres are buildings that are connected to electricity 24 hours a day, 365 days a year. That means they are using water and electricity all the time. And this is becoming the only state in Mexico in which 100% of the territory is at risk of drought. It means communities don't have access to water. When I visit these communities, I see how they don't have access to water; they only have access to water one hour per week, while on the other side, the infrastructures have access to water 24 hours a day.
So AI nowadays is not only reproducing stereotypes and biases; it's producing climate injustice. If we don't regulate how this infrastructure is being enacted in different geographies, it will have consequences for climate justice.
So something we have proposed at this conference on AI ethics is that, rather than talking about digital sovereignty in a way that creates frictions between states, because all the states in the world want to become digitally sovereign, we should talk about digital solidarity. We should talk about how we can have networks of solidarity between one state and other states, so that all together we can develop digital technologies and become communities independent from big tech companies and their innovation.
For instance, as an expert in AI, I used to be able to develop my own AI algorithms with my own laptop. And nowadays, I can see that innovation in AI relies on big tech companies. We're not able to develop AI technology anymore; we have to depend on big tech companies. And that has become clear at this conference: the GPTs are developed by big tech companies. They are not developed by universities. They are not developed by other technical institutions anymore. So it's not only about infrastructure; it's about how we can become digitally sovereign and how we can develop AI with other infrastructures.
So I think that's it from me. Thank you very much.
>> MODERATOR: Thank you. And now I'll pass the floor to Alex Moltzau. He joined the European AI Office in the European Commission the day it went live, as a policy officer and seconded national expert sent from a Norwegian agency. He coordinates work on AI regulation and is currently also a visiting policy fellow at the University of Cambridge.
>> ALEX MOLTZAU: Thank you so much. It's a pleasure to be here today, and really great to add to the interventions of Ana and Jose. My name is Alex, as was said. And being here today is wonderful: as someone who came from Norway to the Commission, to see everyone come together. I think it's a really bright community.
But this topic that we are discussing here is really close to my heart. My background is in social data science, which combines social concerns with data science methods. So I'm programming-oriented, but at the same time draw inspiration from a lot of social science fields. I also have a Masters in artificial intelligence for public services. And at the university where Jose is placed, I spoke at the conference on AI sustainability, though I had not spoken at the prior editions. I previously held a talk on the climate crisis in 2020. So for me, I just saw that, with all the compute increasing, the infrastructure being built, and the consumption patterns across all the fields, it was kind of strange to think this was not going to be a problem.
Honestly, I think what we're dealing with here is something strange in that we haven't seen it much more clearly. Because we want to deliver great services to our people. We want to have amazing companies and compete in a friendly way, as much as possible. But at the same time we have shared problems, and these are expressed through the Sustainable Development Goals. I worked with AI policy full-time for five years prior to joining the AI Office, where I have now worked for one year. And before that, I worked with a nonprofit organization called Young Sustainable Impact. So I had a community of 11,000 people around the world, and we tried to think about how we could bring forward new solutions and new companies to address the Sustainable Development Goals. Maybe we were a bit naive. Ha. But I think we have to be naive. I think we have to believe in that broader future. And, for sure, that is not to just senselessly use technology without any thought about responsibility and without the context that we live in.
We live in a time where we have a climate crisis. We live in a time where we face a plurality of different crises. We can only face them in digital solidarity. I think what Ana said about the minerals is very clear. And I'm also glad to say that where I'm working now, on the ground floor of the European AI Office building, we have a large artwork called "Anatomy of an AI System." It shows the value chains of the Alexa Echo and how it's all linked together. It's an artwork created by Kate Crawford. So, in a way, every single time we walk into the building, we're looking at that artwork. I think what I like about the European Commission, and about the people who work there, is that they really care deeply about this. So I can tell you that, for sure, it's not something we want to ignore. It's something we want to commit to.
Today I'm not talking on behalf of the European Commission; I'm not presenting its official perspectives. I'm here as an individual. But I will tell you about a few things that we are working on. One of them is a call that we have rolled out to finance collaboration on generative AI, to get new perspectives, solutions, and companies in collaboration with Africa. There are 5 million Euros committed to this. I encourage anyone working here, in Europe or Africa, to apply together for that. The deadline is the 2nd of October this year.
So please consider whether there are any good projects for collaboration on that. If you have read the EU AI Act, there's a commitment to a standardization request on energy reduction. And there's also a study on green AI running now internally in the Commission. So although I would like to see us do more, it's not as if we are doing nothing, I'm happy to say. But I think what we have to do is think about the rollout of the large-scale policies we are rolling out now. And it's a lot. InvestAI, announced during the AI Action Summit, is a 200 billion Euro initiative. That investment is not a joke; it's significant. We are rolling out the AI Factories. We now have the Cloud and AI Development Act to try to think about this in more ways. There is, kind of, a lot of momentum to scale up digital sovereignty in Europe. But doesn't sovereignty mean that we should decide for a better future? If sovereignty means we can make those decisions, if sovereignty means we can decide to do something that would be better for our citizens, better for the population, then I would think that this rollout also has to be as responsible as possible, as sustainable as possible, as green as possible. And, of course, that's my personal opinion. I look forward to listening to the others and discussing today.
>> MODERATOR: Thank you, Alex. Very interesting what you said about not using technology without considering the context we live in.
Jose wants to say something.
>> MODERATOR: Thank you. I would like to respond to the first two speakers, but I think we already have lots of interesting topics for the Q & A. So I would like to introduce Pedro Ivo Ferraz da Silva, a career diplomat and coordinator for scientific and technological affairs at the Ministry of Foreign Affairs in Brazil. He is also a member of the Technology Executive Committee of the United Nations Framework Convention on Climate Change and Brazil's focal point to the Intergovernmental Panel on Climate Change. I would like to say that I think there's no one with more knowledge of what is going on in the discussions at the UNFCCC and their intersection with technology. Pedro, the floor is yours.
>> PEDRO IVO FERRAZ DA SILVA: Ten years ago, I had the honour of helping organize IGF 2015 in Brazil. At that time, AI was just emerging as a topic, and climate change and the Sustainable Development Goals, in more general terms, were rather just a subtopic in the context of the discussion. So I'm glad that, after 10 years, things have evolved and we are here delving into these interesting topics.
I greet you all. We have just concluded the June climate negotiations, and AI was an important topic of discussion there, tackling, of course, the benefits that AI can bring to climate action and also its environmental footprint, which was part of the discussions as well.
So as you know, the world is facing, among many other challenges, that of accelerating digital transformation while staying within planetary boundaries. As I said, AI is both a powerful tool and a source of new tensions. It can be used in many ways: for example, to model climate risks, to forecast disasters, to optimize infrastructure for low-carbon development. But it can also deepen inequality, centralize control, and, again, exacerbate many environmental harms if it is left unchecked. So the question is not whether AI will be used or not; it is already being used. The real question is who decides how it is used, for what purpose, and at what cost. In this context, I think governments have a critical role, not only as regulators, but also as stewards of the public interest and as drivers of innovation and development. Governments must ensure AI governance frameworks are rooted in democratic values. It's important that AI is aligned with climate goals and also protects human rights. At the same time, these frameworks must encourage innovation. And if we look at innovation within the climate context, there is a dire need for AI to be a driver not only for mitigation purposes, but also for adaptation and resilience in vulnerable communities. I think the discussion we're having here, and I thank the partners for organizing this panel at the IGF, is timely as we look ahead to COP30 in the heart of the Amazon. The Brazilian presidency of COP30 has proposed a vision centered around the idea of [?]. It's a bit difficult to pronounce, but it means a collective and community-driven effort to tackle shared challenges. It is a concept that permanently reminds us that climate action is not just about technology, but also about cooperation, participation, and shared responsibilities.
This should also guide how we approach the governance of AI. The current global landscape of AI, I think, reflects a profound asymmetry. We have mentioned it here: AI has an enormous potential to support climate action, yet its development and deployment are dominated by a few countries and a few corporations, as mentioned by the previous panelists. So most of the world remains excluded from shaping these systems. At the same time, the environmental footprint of AI is increasing while, and here is a very important aspect, the transparency of AI is declining. A recent study from two weeks ago found that 84% of widely-used large-language models provide no disclosure at all of their energy use or emissions.
Without better reporting, we cannot assess the actual trade-offs, we cannot design informed policies, and we cannot hold AI and related infrastructures accountable.
That's why inclusive international cooperation is essential, and it must be accompanied by local empowerment. I refer, again, to another report: the Technology and Innovation Report from this year, titled "Inclusive AI for Development."
It lays out, among other things, that developing countries need to strengthen three strategic capabilities in order to be able to shape AI: skills, data, and infrastructure. It terms these leverage points that allow countries of the global south not only to access AI, but to really shape it in ways that reflect local priorities, protect biodiversity and natural resources, and advance climate justice. And this is not just about developing new technologies. It is also about ensuring that AI systems are embedded in institutions, practices, and values that are transparent, inclusive, and, of course, climate-aligned.
As we look to the future, I think we should reject the false binary between national sovereignty and global cooperation. We need both of them to be rooted in equity and climate responsibility. And I think this idea conveys that and allows us to move forward.
So these are my initial remarks. I thank you all again for the invitation and the discussion, and I look forward to the Q & A. Thank you very much.
>> MODERATOR: Thank you very much, Pedro. Great talks. I'm really looking forward to the Q & A.
I'll introduce Yu Ping Chan, who is with us on site as well. Yu Ping Chan heads digital partnerships and engagement at UNDP, the United Nations Development Programme. Before that, Yu Ping Chan was a diplomat in the Singapore Foreign Service.
Welcome, Yu Ping Chan, and now you have the floor. Please.
>> YU PING CHAN: Thank you for having me here today. We are the development wing of the United Nations. We're in over 170 countries and territories around the world, supporting governments through all phases of development, in all aspects and so forth. The digital portfolio at UNDP is extensive: we work in more than 130 countries on digital for achieving the Sustainable Development Goals. It's interesting to be part of this conversation and to hear your thoughts about what is so critical in terms of this intersection between digital and the environment. I couldn't agree more with some of the areas Pedro highlighted. We have actually been very privileged to work very closely with the Brazilian COP presidency in the lead-up, thinking about how these issues intertwine. He talked about the challenges around AI exclusion and AI inequality, and this is the framing that UNDP is looking at: considering how the AI revolution is going to potentially leave behind even more countries in the world and really exacerbate the divides between the global south and the north. Projections show, for instance, that only 10% of the economic value that will be generated by AI in 2030 will include the global south majority countries, with the exception of China. So we have a situation where the AI future is going to be even more unequal than what we see today, when presently, for instance, over 95% of the top AI talent in the world is concentrated in six research universities, basically in the U.S. and China. You see how we run the risk of having AI be, in some ways, the domain of certain types of monopolies, developed in certain ways and not responding to the needs of local populations and the majority. So this is where UNDP has been looking at how we strengthen systems and inclusivity in these domains. It's not just about AI. Even before AI, we need to have data. Before data, we need basic connectivity. Before connectivity, we have to think about infrastructure and energy.
These are all challenges, so it's not enough to just think of AI by itself. You need to think about the entire developmental spectrum across all these issues, and tie digital and AI transformation itself into a holistic approach that goes beyond just one ministry and thinks about the broader purpose of sustainability and inclusion, with digital transformation as part of societal approaches as a whole.
For instance, we have initiated a lot of work around some of the areas that other panelists have highlighted: the gaps around skills, compute, and so on. Last week in Italy, we launched the AI Hub for Sustainable Development with the Italian presidency. It's a product of the Italian G7 presidency, looking at how we can support local AI ecosystems in Africa, strengthen AI innovation, and partner with AI start-ups in Africa to bring them to scale and build capacity within Africa to be part of the AI revolution.
We have also worked on various areas when it comes to digital and connectivity, as well as digital and environmental sustainability and climate issues. We have a Digital for Planet offer. Besides working closely with the Brazilian presidency, we lead the coalition on digital environmental sustainability with the International Telecommunication Union, the German ministry, the Kenyan government, and Civil Society organizations such as the International Science Council and Future Earth, to think about what kind of thought leadership and global advocacy we need around this intersection of digital and environmental sustainability. This is in addition to the work being done, as I mentioned, in UNDP's country offices around the world. We have worked on national carbon registry systems and on Digital Public Infrastructure for climate in countries like Libya, Costa Rica, and Nigeria. I have a long list I could go through, but suffice to say, there's a lot of information online about what UNDP is doing around the world in the area of digital, environment, and climate.
But all of this is not to say that it's enough, because I think some of the other panelists have already talked about how we are aspiring to something a lot greater than just these pieces. It's not enough to say we're doing these projects; we have to be thoughtful in how we roll out the projects and the big investments. And, actually, it's interesting that they invited me to be part of this panel today, because it came out of another convening we did last year at the IGF in Riyadh. We were developing what we call the Declaration on Responsible AI for the SDGs, which was launched two weeks ago at the Hamburg Sustainability Conference. We brought multistakeholder communities, governments, investment banks, and the Civil Society community together to think about how, in the use of AI, we have to be responsible in how we design, deploy, and use it, precisely in these areas.
So we have already had over 50 stakeholders sign on to the Declaration, which is the first multistakeholder document in this particular space. We would encourage and welcome more organizations to sign up and make commitments in this regard. It's precisely that: how do you thoughtfully engage with AI, and how do you commit to using AI responsibly in the achievement of the Sustainable Development Goals and environmental sustainability as well.
So, again, I look forward to hearing from all of you.
>> MODERATOR: Thank you. Now I pass the floor to a member of the Homeless Workers Movement.
>> ALEXANDRE BARBOSA: Thank you for the invitation. As a member of the Homeless Workers Movement, some of you may be asking: what is the Homeless Workers Movement? It's a movement in Brazil that was founded in 1997, because there's a huge housing gap in Brazil, visible in different statistics and in housing conditions, and the state has lacked the proper tools and instruments to deal with the issue. So the people themselves are struggling and fighting. And I mention this because the same applies to technology and digital sovereignty. Our approach to digital sovereignty is what we call popular digital sovereignty, and I'm referring to the Latin American sense of "popular": it deals with the mass, people-centred aspect of sovereignty.
And for us, mainly, what we've been doing for the past five years is the things that the state actually hasn't provided to us so far: really fighting for meaningful connectivity, for digital literacy, and also fighting for decent work.
And then we realized that what we've been doing in practice is somehow what is claimed by digital sovereignty. But for this specific panel, I think it's good to emphasize that Brazil is chairing BRICS this semester, with the summit in the following week. Within its structure, there's the BRICS forum, to which Brazil also added a popular dimension, the popular forum. And we collaborated in a digital sovereignty working group together with the Landless Workers Movement, which is another really important social movement in Brazil, struggling for land reform. This work was really, really interesting, and you'll probably have access to the document in the following week. We promoted this idea of people-centred digital sovereignty, and we also outlined some guidelines that take into consideration both people and nature, climate needs, and so on.
There are other guidelines specifically dealing with AI development; I think it's really worth checking the document. What I mention here is meaningful access, digital work, and so on, to highlight that whenever we talk about AI sovereignty, we cannot restrict the conversation, as other panelists mentioned, to computing power, regulatory capacity, data capacity, or basic regulations. It must also include connectivity, electricity access, digital literacy, and a transition to decent and better jobs in the AI era.
I think those are mainly the initial contributions I wanted to put in place. If you have any other questions, feel free to reach us. If you are curious about what a social movement is doing in regard to digital sovereignty, you can access our website, which the moderators can share. It lays out our approach to digital sovereignty, and I think it's pretty much aligned with the sustainable vision of digital sovereignty.
And just to add a more critical aspect of sustainability here: we've been watching the greenwashing of the sustainability agenda over the past 15 years. So eventually it's time to discuss alternatives to development, especially in Latin America. In the Latin American environment, we have other agendas, aligned with the climate justice discussion. Thank you very much.
>> MODERATOR: Thank you. Now I would like to see if we have any questions on the floor. Please feel free to come to the mic. Okay. We have some.
>> SPEAKER: Hello. Thank you. My name is Manuel. And I'm a member of Parliament from the Philippines. I'd like to address my question to our representative from the European Commission.
So in the Philippines, in our case, developing artificial intelligence, especially the large-language models, involves a lot of labour, particularly in settings like call centres, where the structure is like a call centre but the work is for the large-language models. And one thing we want to ensure for our citizens is that all the exploitative labour practices are not replicated and extended into AI development. So since the continent of Europe is making some steps in AI, I would like to know if there are current laws or policy provisions that also touch on the protection of labour and workers. Thank you.
>> MODERATOR: Thank you. Just a reminder to the speakers, when answering the questions, just please also state your final remarks. Thanks.
>> ALEX MOLTZAU: I'm here today in a personal capacity; I'm not presenting the official views of the European Commission. I would like to say that first.
I have a bit of a background here as well, as a Norwegian, coming from a country that cares a lot about labour legislation and about collaboration. I also always actively talk to unions when I travel back to Oslo. I think it's extremely important to think about the impact on labour and on the way that we work, but also on the way that we are affected. I think what you're saying is extremely interesting, because what we're seeing is that all the large language models require a lot of supervised machine learning. We have to tag all the different data, and that requires a lot of human labour.
And part of the backdrop is, for example, Kenya, where there were a lot of movements to unionize, to see whether there is any way to increase workers' rights, or to increase the pay they get for doing all of this work, making sure that these models actually work in practice.
I think your question is extremely timely. In the European Union, we still have fairly strong labour legislation and rights. And I think it's fair to say that AI does not operate in a vacuum: we have existing laws, we have existing values. So let's make sure that those existing laws and values really guide the ways we act in the field of AI, because I don't think they do right now; I think there is such a long way to go. So I just wanted to thank you for that. How we handle this in the field of AI is something I have seen the European Commission working on currently. I don't think I can give you a definitive answer on how to protect labourers overall, but the AI Act does include concerns regarding employment as well, in its risk categories. So in this way, at least in our region, it has consequences.
So with that, I guess those are my final comments, and I pass it on to other questions and speakers.
>> MODERATOR: Thank you. We have another question.
>> SPEAKER: Edmund Cheung. Thank you for linking this to sovereignty and digital sovereignty. I think many of the panelists touched on this, and Pedro mentioned the false dichotomy between national digital sovereignty and global cooperation, especially the global public interest, in my mind. And one of the things I'd like to hear from the panel, but also to really think about, is personal digital sovereignty as well.
I think ‑‑ sorry, I forgot the name of the person who mentioned popular sovereignty. Because personal digital sovereignty matters, and I think Yu Ping mentioned data coming before AI. Personal digital sovereignty is an important part of, you know, really safeguarding AI that is people-centric, for the end user, ultimately. It's not even a dichotomy. In order to bring it full circle, it's not only both: it's personal digital sovereignty, national digital sovereignty, and the global public interest, which together close the loop.
So, yeah. That's my contribution.
>> MODERATOR: Okay. I'll take the next question. And then we'll go to the answers. Yes. You can ...
>> SPEAKER: Hello. Thank you for this amazing panel. I come from Peru; my name is Rosie.
I thank this panel for also involving things such as, you know, digital literacy and appropriation of technology.
So I would like to ask about environmental sustainability. Because at least in my country, there's, like, a race to regulate AI. We are the first country in our region that has an AI law, and we are trying to approve a regulation for it as well. But a huge environmental view is missing. And we also know this is happening in Digital Public Infrastructure in general. So I would like to ask how you think that we as Civil Society and organizations, together with grassroots organizations, can advocate for that without getting into the greenwashing approach that our colleague from Brazil was sharing with us. Thank you.
>> MODERATOR: Thank you. We have less than five minutes. Please, speakers, feel free to answer. Rapidly. Thank you.
>> MODERATOR: Yeah.
>> ANA VALDIVIA: Thank you. On the environmental impacts of data centres: I think one solution would be to talk to other colleagues of yours. There are a lot of social movements in Latin America. I talk about Mexico, but there are other movements, and I can put you in touch with them. They have been advocating for more transparency. Currently, for instance, in Mexico we don't know how much water and energy data centres are using. Chile has a platform where cities have an environmental report: prompted by the data centre industry, the Chilean government decided to create a platform, so data centres have to report their environmental impacts.
As I mentioned in my intervention, I would be happy to put you in touch with other organizations in Latin America. And thank you for your question.
>> MODERATOR: Alexandre.
>> ALEXANDRE BARBOSA: I would like to react to the first question as well, and emphasize that what we're dealing with at this moment, in the contemporary conjuncture, all the AI discussions, AI sovereignty, regulation, AI and environmental sustainability, has to do with politics. Right. And that pretty much calls for organizing social movements, especially popular movements and grassroots movements, to deal with environmental concerns. We've seen the digital struggle against deforestation in the Amazon region.
It's much, much more difficult than any specific. Thank you very much for the opportunity.
>> MODERATOR: Yu Ping Chan.
>> YU PING CHAN: To add another dimension: it's not just politics, it's also big tech. On the first question, the fact that there's a need for labour regulation: it's a question of who owns the product of that labour. Because the LLMs are going to be owned by big tech companies and not freely available to the populations putting in the data or the effort to create them.
There are all these issues that are tied into technology. But this requires, and I like the point about the mobilisation of concerned individuals, groups and so forth, that we share experiences and thoughts about how to respond. That question about what we should do. And I want to link to what you said: maybe sometimes we are naive in what we try to achieve. I don't think that's the case. I think the more we get together, share information and Best Practices of how things have been successfully changed or advocated for, and the clearer our messaging on what we expect from governments, policy makers, and international organizations, the better, hopefully, the outcomes can be. At the very least, we need to be part of the conversation. Thankfully this is one of those opportunities to have this type of conversation, and to translate it into policy and hopefully impact as well. So my last closing message is to continue to speak up, to be involved, and to think about how we collectively can make those changes we want to see.
>> MODERATOR: Pedro?
>> PEDRO IVO FERRAZ DA SILVA: I think one conclusion I draw is that we need, perhaps, to move away from the narratives that come especially from developed countries: that we live in a moment of, for example, a triple planetary crisis. You know, a view that tries to limit the problems that we face in the world. Actually, I would rather say we live in a moment of crisis that contains, of course, the environmental crisis, but also the social crisis, with diminishing labour rights, while people still fight to overcome the challenges of hunger and poverty. And, of course, the crisis related to digital rights, which, actually, is a crisis that has been very central to the debate at IGF 2025, in which I was participating.
So I think we need to tackle all of these crises in a coherent way. I think encouraging social movements and grassroots movements is fundamental, and technology can play a very important role here by leveraging those movements. So perhaps that is the final message: considering we are facing various crises at the moment, we should address them in a coherent way.
>> MODERATOR: Thank you for the great discussion. Can we please take a picture? Can we put the speakers on the screen, please.
>> MODERATOR: Good morning, distinguished panelists and global citizens. It's an honour to welcome you to this defining moment here in Lillestrøm, near the city of the Nobel Peace Prize.
Today we extend that reflection into cyberspace. We are privileged to announce the Cyber Peace Index. It not only measures security in the digital world but also aims to inspire cyber peace: a digital future built on rights and responsibilities. It's more than an index; it's an evolving, dynamic framework.
The index is built through a multi-stakeholder approach involving environmental regulators, technology platforms, Civil Society and digital rights advocates, academia, the Technical Community, and the people whose lives are shaped by these systems.
The Cyber Peace Index aims to inform, empower, and activate. It challenges platforms to have better guardrails. It equips communities. And it helps us all navigate towards a safe, inclusive ecosystem. The index aligns with global compacts like WSIS+20 and the UN SDGs. It aims to establish a new normative proposition: that cyber peace is essential. From a country and city that celebrate peace, we begin a new journey in which cyber trust becomes a global public good. Thank you for being part of this historic moment. With this, we would like to introduce the panel of speakers present here.
We have been joined by Mr. Suresh Yadav, a leader on AI and digital transformation; a member of the environmental advisory committee for ICANN; Ana, an associate professor; and the senior legal manager for the European Center for Not-for-Profit Law. The session will be moderated by me, the Founder of Cyber Peace, and by Dr. Subi Chaturvedi, Global SVP for Public Policy at InMobi. Over to you.
>> SUBI CHATURVEDI: Thank you so much. Huge congratulations are in order. It's a very, very big day and a historic milestone. I'm really, really happy that we're celebrating the launch of the Cyber Peace Index. Cyber is a word that comes first to those of us who are digital leaders, and peace is something we've been working towards. So a huge round of applause is in order, for the months and months of hard work and the midnight oil we've been burning to make sure that, as multi-stakeholders, we are able to create something we want to keep for all future generations, something that we cherish: the internet.
I'm happy that we have a room that is truly representative of diversity and truly inclusive. It's so good to see friends on the panel today. And, also, for me, IGF will always be my home. I was a member of the United Nations Multistakeholder Advisory Group. Those were good times and moments we cherish.
To set the context, first: why is this so important? I think it's critical to remember that the first malicious cyber attack came in 1988. It was designed by a university graduate and intended only as an experiment to measure the size of the internet, but it inadvertently caused disruption. It affected 10% of the internet at the time; systems slowed down or crashed. Today, as we hold this panel in this very amazing host country, cyber is at a crossroads. We're hoping to secure our digital future in 2025; that's the theme this panel is exploring. When we look at 2025, cyber warfare is no longer an emerging threat, which is why the launch of the index is so important. It is today a global reality. I lead the global charter at InMobi. We have created devices, we have ensured we're enabling even the last mile for connectivity, and we are bridging the digital divide, ensuring that the internet is not just something you come and experience but a safe space we create together. So global realities are shifting in the current geopolitical scenario. Both aspects of geopolitics, development as well as human security, become paramount, and global governments are facing this question. Therefore the role of the Technical Community, academia, and think tanks becomes very, very important. With over 50 cyber attacks per second now targeting critical infrastructure worldwide, and the cost of cyber crime having surpassed $10.5 trillion, the need for holding institutions accountable, and for constantly building capacity so that regulators as well as policy makers are not left playing catch-up, has never been more urgent. It couldn't have been more timely. Therefore, as digital technologies continue to expand into every facet of life, from AI-powered public services to quantum systems, we can foster inclusivity and stability instead of conflict.
So this year we will spotlight the intersections between technological innovation, cyber conflict, and the global quest for a secure and resilient digital future. Therefore, in this session, we aim to provide a road map for action by governments, platforms, the private sector, and Civil Society alike.
Beyond the numbers we've talked about, one more figure needs to be highlighted: 45% of all breaches now involve AI-generated phishing and social engineering, a sharp rise driven by generative content. And, therefore, I want to pay tribute to one of the fathers of the internet, who has been a mentor and someone I look up to, and to the fact that we have to insist on upholding the core values of the internet: interoperability, upholding of human rights, and ensuring that innovation can thrive. That belief is at the heart of everything we hold dear. It's very, very critical. And human error remains one of the weakest links.
So it has to be a key pillar of what we're talking about. New conflicts increasingly involve cyber attacks, which are being used as precursors or amplifiers of warfare. There are pressing questions about international humanitarian law in cyberspace. Therefore, spotlighting national security and sovereignty is very, very critical. Cyber attacks are part of modern warfare, being used to disable critical infrastructure like hospitals and defense networks. In a world of asymmetric threats, even non-state actors can launch attacks that disrupt nations.
Nations will always make sure that national sovereignty and security take precedence. We have to ensure that all of us come together and work together to create a multi-stakeholder environment where freedom of speech and expression are still held dear, and where we are able to balance citizen rights with cybersecurity. Maintaining cyber peace becomes essential to preserving sovereignty.
Disinformation can exacerbate propaganda. Cyber peace is, I think, clearly the answer, and what we're doing together in this historic moment is ever more critical. Securing information ecosystems, which can prevent manipulation during elections, protests, and global negotiations, is important. The other pillar is critical infrastructure vulnerabilities: transportation, banking, health care, water systems, and nuclear facilities are all digitally interconnected today. This is something the Index deals with, as well as global interdependence and economic stability. I think enough and more needs to be said about protection of civil and human rights. Conflicts involving cyber attacks often target civilians, disrupting access to health care, education, and communication. And one of the ideas we want to highlight today, as we take the discussion further, is that there is still a gap. A lot of effort has been made to create clear global norms, but we need more guidelines. We also need 40,000-feet-above-the-ground principles.
Unlike nuclear or chemical weapons, cyberspace lacks clear rules, and in their absence, peace-building mechanisms like multilateral cooperation are going to be ever more critical. Therefore, we see this as an enabler for peace-building and dialogue. Technology can also be used to monitor cease-fires, enable humanitarian aid, and facilitate cross-border dialogue, ensuring a peaceful cyberspace that allows digital tools and amazing minds to flourish and create a resilient ecosystem, so they serve as instruments of peace and not terror.
And just to end, I think it's important to highlight a growing digital power and member of forums like the G20 and the U.N. that has been championing responsible cyber behavior, and that is going to host the AI Impact Summit in February 2026.
I think this is where technology for good comes in. Initiatives and leadership in cyber diplomacy, Digital Public Infrastructure, and capacity building, as well as a moral ecosystem built on frugal innovation, can be lessons we cherish and celebrate. And many congratulations, I think, to all of us, again, for being part of this historic milestone. Over to you, Vineet. I would love to hear more.
>> MODERATOR: Thank you for setting the context. Before I move to our first speaker, Mr. Suresh Yadav, Senior Director for AI for Trade, Oceans and Natural Resources: I've been part of the AI consortium with him. He's been one of the key persons leading the consortium and the key activities that the Secretariat had taken on. Before I hand over to you, Suresh, I request the team to show the presentation. I want to show everyone the metrics we have factored into the index. We have compared around 10 existing indexes, including the Oxford cybercrime index, and the Cyber Peace Index is slightly different. First, rising digital threats: the digital world faces increasing threats, as was highlighted, including AI-generated misinformation and attacks targeting nations and individuals. Second, the limitations of the existing indexes: most emphasize compliance and cyber policies, overlooking real-time user safety; trust and safety of end users are something we find missing. Hence there is a need for a new, inclusive framework, essential to measure digital trust, harm, and resilience dynamically. Keeping that in mind, this is how the Cyber Peace Index has come into play. It is citizen-centric and focuses on the issues I just highlighted.
We are aiming to achieve real-time capabilities. These are the 10 pillars, or what we call the 10 measures, of the index.
These are some samples of the dashboard we are trying to build. This is a sample; it's not actual data, but this is how the index will appear: a country-to-country comparison of scores will be highlighted, along with charts, and all these features are being incorporated. We have done a study on the existing indexes; since we have a shortage of time, I'll leave it for later and we'll put it on the website for everybody to see. At the end, we'll also leave a QR code so people can join via a Google form. They can be part of the Global Advisory Council, and we're open to suggestions and feedback as the index continues to evolve.
These are some of the comparisons we have made, which will be available on the website: the EU cybersecurity index, and we have also looked at the Global Terrorism Index, alongside the Cyber Peace Index. The future vision of the index is to expand real-time analytics to capture emerging digital threats and user experiences worldwide; to improve inclusivity; to develop specialized measures for quantum cybersecurity and emerging technologies; and to enhance the tool for policy makers and researchers.
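The country-to-country score comparison just described can be illustrated with a minimal sketch. This is not the actual CPI methodology; the pillar scores, the equal-weight default, and the function name are hypothetical, standing in for however the index team ultimately aggregates its 10 measures into one comparable figure per country.

```python
# Hypothetical sketch: combining per-pillar scores (0-100) into one
# weighted composite score per country, so countries can be compared.
# Pillar values, weights, and names are illustrative assumptions only.

def composite_score(pillar_scores, weights=None):
    """Weighted average of pillar scores; equal weights by default."""
    if weights is None:
        weights = [1.0] * len(pillar_scores)  # treat every pillar equally
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(pillar_scores, weights)) / total_weight

# Two hypothetical countries scored on three illustrative pillars.
country_a = composite_score([80, 60, 70])             # equal weights -> 70.0
country_b = composite_score([80, 60, 70], [2, 1, 1])  # first pillar counts double
```

One design question this makes concrete, raised again later in the session, is how the weights are chosen: doubling one pillar's weight moves the second country's score from 70.0 to 72.5, so weighting decisions directly reshape country rankings.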
With this, I'm just sharing the link for anyone in the community. It's a multi-stakeholder approach, so industry, academia, Civil Society, technical groups, and citizens can join the advisory board by scanning the code. We'll share the link in the chat for people to join. And with that, I'd like to hand over to our speaker, Suresh Yadav.
>> SURESH YADAV: I hope you can hear me. Thank you very much, and good evening. Thank you, Vineet, for inviting me to share some of my thoughts.
First of all, a big congratulations to you and your entire team for this very innovative tool. You are trying to bring together all the relevant indexes around the world and create a new index, which really takes into account the peace work and the SDG work. It's very innovative thinking. So huge congratulations.
It was also very thoughtful to host this launch event in a city of peace, where peace resonates in each and every part of the city. That's very powerful. Thank you for that.
I just wanted to highlight that the world economy is around $210 trillion at the moment and expected to be around $340 trillion by 2030. And, as has been mentioned, artificial intelligence is going to contribute around $15 trillion between 2023 and 2030. AI is going to further accelerate this whole process.
But at the same time, if you look at the global cost of cyber crime to the economy, it's estimated to be around $10.5 trillion in 2025. That would make it the third largest economy in terms of size, after the USA and China. This economy was around $2.1 trillion in 2015 and $6 trillion in 2021, and it has been growing at a rapid pace of 15%: the fastest growing segment of the global economy.
And what does that mean? It means that there are a lot of people who are making money by committing crimes, frauds, and scams. Economies are losing. People are losing.
It means that countries, societies, and people are not able to defend themselves. This is where the market has failed, and where you need investment from different areas. To guide investment in a particular direction, you need certain parameters, so you know which direction to go to attract it. So I see this great work done by Cyber Peace as designing and directing: showing investors, within a country and outside it, where to put their money. From a different perspective, this will give a lot of food for thought to various investors and help people see and identify countries. I see it as a form of cyber diagnostic for investment opportunities, apart from putting countries on the map on this cyber issue.
All those parameters that have been mentioned, and also linking them up with the SDGs, I think, is a great thing. As we know, we have been lagging on the various SDGs, and the question is already what to do to accelerate. I think this is an added tool in the SDGs for the global society to look into the whole process.
So, Vineet, once again: I have to go to another meeting; I just left one to join you. Once again, congratulations to you and your entire team for the innovative index. I'm sure the work you have done will make society, the world, and countries a much safer space, and make the internet a safer space, particularly when the speed and the quantum of cyber attacks are increasing. And I would point out that malicious actors largely challenge, destroy, or damage the state economy and government systems. That's where, I think, we should invite a lot of attention and interest from the various forums: governments, Civil Society, the private sector, academia, and research institutes, who will take this index as a benchmark in carving out further activities. So thank you very much, and I wish you all the best in this endeavor!
>> MODERATOR: Thank you. Thank you for taking the time to join the event. Moving from online to offline, I would now like to invite my next speaker, Nicholas. Over to you, Nicholas.
>> SPEAKER: Thank you. Thank you so much. Thank you for the invitation. My name is Nicholas. I'm a chair at ICANN. It's an honour for me to stand before you at this milestone 20th IGF in Oslo, where collaboration and innovation define our shared vision for digital governance. Today, as we launch the Cyber Peace Index, which was already explained, we take a critical step toward quantifying what was once abstract: the stability, security, and inclusivity of our digital ecosystems.
So why does this index matter for governments in general? Whether it's shaping national policies or negotiating global frameworks, we need actionable metrics to guide these decisions. And I'm speaking as a data scientist here.
You know, this index provides exactly that: a compass to navigate complex cyber landscapes. It basically aligns with the IGF theme, building governance together, by offering a transparent, multi-stakeholder tool to assess risks like cyber conflict, as already mentioned, digital divides, and threats to critical infrastructure, among many other things. I don't need to get into the details.
For instance, just to give some more colour, the index can spotlight disparities in digital resilience, such as Africa's 38% internet connectivity gap, or the gender divide leaving more than 189 million more men online than women globally. By measuring these gaps, governments can target investments and policies more effectively, ensuring that no one is left behind in our digital future.
There's another thing I would like to point out, which is, let's say, the open-source imperative. The choice to develop this index using open-source software is both strategic and symbolic. Open source embodies the IGF's spirit of collaboration, allowing governments, Civil Society, and technologists to scrutinize, adapt, and improve the tool collectively. As we've seen with initiatives like OpenSSF, open-source security tools thrive when communities unite to address vulnerabilities and share Best Practices.
Moreover, transparency in the index's methodology, again enabled by open source, builds trust: a very important concept, in my opinion. Just as Norway's digital emblem initiative protects humanitarian infrastructure through open standards, the Cyber Peace Index can become a global public good, free from proprietary constraints or geopolitical silos. Finally, a call to action for everybody: not only governments but Civil Society, academia, and so on. I would say we should commit to three principles. First, adopt the index to inform national cyber strategies and international cooperation. Second, contribute to its open-source framework, ensuring it evolves with emerging threats like AI-driven disinformation, just to give an example. And third, champion inclusivity, ensuring the index reflects the needs of all nations, especially those most vulnerable to cyber instability.
In closing, the Cyber Peace Index isn't just a metric; it's a manifesto for collective action. So I would urge governments to embrace this tool not as a static report card but as a living platform for progress. Together, we can turn data into dialogue, and dialogue into lasting cyber peace, which is the main point of this conference. Thank you so much.
>> MODERATOR: Thank you. You made a very important point: it's not a metric but a manifesto for collective action. I'm sure this is how the index will grow and evolve, and we'll see it in action.
With that, I move on to the next speaker, Ana, an associate professor in Russia. Anna, can you see us?
>> ANNA STYNIK: Can you hear me?
Yes. Good afternoon, everyone. Thank you for the opportunity to share my reflections on the launch of the Cyber Peace Index: a timely and ambitious initiative, especially as digital threats continue to multiply.
So, first of all, let me sincerely congratulate my colleagues on an excellent and thoughtful presentation.
What I personally find most important and even inspiring is that this index doesn't follow the usual hard-power logic. It brings a new lens: digital peace and societal well-being. That is a very different, citizen-centred vision, and exactly, I think, what the world needs right now. Today humanity finds itself at a turning point, of course because of the rapid development of AI, which is bringing a growing number of complex, interconnected threats. Societies are struggling to adapt fast enough, while states are increasingly drawn into a competitive race to develop and deploy AI technologies.
In such an environment, reaching consensus on global rules and safeguards becomes an extremely difficult task. Against this backdrop, the initiative presented today offers a much-needed perspective. It starts not from the top down but from the bottom up. Instead of focusing solely on state power or institutional frameworks, it seeks to understand the shared digital experience of ordinary people across the world.
And perhaps by identifying common challenges faced by citizens in different countries, we can begin to uncover areas of general consensus: areas where international agreement is urgently needed. This is why it's particularly meaningful that this initiative is being launched under the auspices of the United Nations, the only universal platform where such inclusive, long-term frameworks for peace and governance can be developed. In this time of fragmentation and distrust, the Cyber Peace Index may help us reach a new global dialogue. At the same time, we have to acknowledge the challenges. How do we measure digital peace? What does peace-centric mean in practice? The framework presented here, with its 10 pillars, is strong and visionary, but the key issue, the measurement logic, will shape its future. For example, how do we quantify psychological resilience? How do we assign weights across 10 very different domains? And how can we ensure comparability across nations, especially those with limited open data or low transparency? I believe one of the great contributions of the CPI is that it goes beyond what we already have; the CPI offers something broader, a kind of umbrella index. So it is important to gradually move from a broad vision toward greater quantitative clarity, and I think clear identification of data sources will play a key role in strengthening the credibility of the index.
National efforts can provide valuable input, but they may sometimes reflect uneven levels of transparency or consistency. In this regard, citizen-level surveys could offer an important complement, helping to reflect public perceptions of safety in a more grounded and inclusive way.
Partnerships with research institutions, including local organizations, could support the development of reliable indicators. Collaborative international research would also add depth and comparability, especially as a joint process rather than reliance on external data sets alone.
Last but not least, it's also important, I think, to reflect on how to ensure broad global relevance, particularly for countries of the global south. Factors such as digital divides and disparities in AI development deserve careful consideration. Attention to these dynamics can help avoid reinforcing the very disparities the initiative seeks to address.
So, dear colleagues, to conclude: I believe this index has revolutionary potential. It invites us to ask how to protect people together: their rights, safety, and peace of mind. That, in my view, is the future we should build together. Thank you. I look forward to collaborating with you on making this vision a measurable and shared reality. Thank you.
>> MODERATOR: Thank you. Thank you so much, Anna, and thank you for your remarks. We have around seven minutes left and a few voices still to be heard. So I now invite the senior legal manager to share his views. Over to you.
>> SPEAKER: Yeah, thank you so much. I'll keep it short; I know we're running out of time. Congratulations on the index. I haven't had time to review it. We look at how AI impacts human rights. I'm based in San Francisco. I'll share a few remarks on how AI, and the impact of AI, often excludes the global majority, and the types of things we'd like to see.
So one thing I often say when we think about AI governance is that algorithm-driven systems rarely warrant a new approach. What is new is the scale and speed at which they operate.
They are really perpetuating and amplifying existing human rights risks, for already marginalized groups such as those in the global majority. One thing we want to see when we talk about AI governance is a focus on real-world harm today, as opposed to the arguably overblown concerns about risk and AI misalignment that we hear about in Silicon Valley and in some European or U.S. policy discussions. One of my favourite things to say about AI is that it is neither artificial nor intelligent. What I mean is that it requires a lot of computing power; it's embedded in hardware, in physical infrastructure, and in the global economy. And I think that's where we see one of the biggest issues for the global majority today: digitalization often relies on the global north. A disproportionate share of data centres are based in Europe, the U.S., and China; there was a "New York Times" article just a few days ago on the global AI divide. I will say that data centres have their own negative repercussions, including environmental ones, and I'm not sure the global majority wants more data centres. But that is to say that data centres and hardware such as microchips, compute power, software, the financing, really where money goes for these technologies, the data, skills, and education, are primarily in Silicon Valley and the global north. How do we redistribute to the global majority?
Something to consider, and I don't know if the index looks into this, is ghost labour. The folks labeling the data and moderating content for algorithmic moderation systems are often in the global majority, further widening the gap in resources, power, and money.
So that concentration of power really does lie within a handful of organizations. And when we think about large language models or foundation models, it's even more the case. You see a handful of companies building the technologies and holding that power.
However, one thing that I've been excited to see in the global majority is that alternative approaches to LLMs are emerging. We've seen in our work community‑led initiatives in the global majority that focus on public‑interest‑driven AI development, and NLP developers putting together data sets in local languages, in Arabic, and really around the world, including in Peru. And that highlights the potential for more culturally informed algorithmic systems. These models, though smaller in scale than ChatGPT or Gemini, demonstrate comparable performance on tasks such as translation. That really shows us the potential for more rights‑based, participatory AI development that does not rely on the dominant AI and LLM providers.
Something I wanted to flag is language inequities in AI development. Most models are trained on data rooted in colonial and imperialist dynamics, which leads to discriminatory outcomes, especially for marginalized groups. We've seen some efforts to debias the data, but these have shown limited effectiveness, and significant performance gaps still persist between dominant colonial languages, like French, Spanish, and German, and more underrepresented languages, especially dialects or languages from countries that do not have enough of the data used to train these systems.
Something we often forget, as well, is that in addition to the data, cultural nuance is often lost when building these systems. And then once they are developed, poor benchmarking prevents AI developers from adequately identifying and addressing discriminatory impacts.
Just a few other things to consider: there are many relationships between AI developers and governments, including support for governments with authoritarian practices, and the impacts that has on human rights communities and marginalized groups around the world.
And the conversation around developing AI often comes from a techno‑solutionist approach. These are often inadequate products and solutions that do not fit the local or regional context, that are not built with meaningful participation from communities, and from which local voices are excluded.
And that's the case both in the development of the technologies and then in their validation. One thing we see, for example, when we think about foundation models in particular, is reinforcement learning from human feedback. I won't go into details about that, but that's mostly done in Silicon Valley and definitely excludes groups from the global majority.
Before I share a few thoughts on AI ‑‑
>> MODERATOR: Sorry. We're out of time. A quick remark.
>> SPEAKER: Yeah. We have 20 seconds. So, no. Thank you for the invitation; we're supportive, and we would like to lean in and participate in the development of the index. I think sometimes we joke about world peace. But I think that in this time of real challenges we have to insist on peace as an option, as a preferred option.
I want to leave you quickly with three suggestions. One, to reframe the digital security concept into digital resilience. That is, I think, very important. Earlier in the week, we talked about this as well. And in the index, I think that will be useful, because resilience, not retaliation, is about peace. And that's where we need to be.
We also shouldn't shy away from geopolitics. And I think that's part of what the index looks into as well. That leads to the question of digital sovereignty. And sovereignty, in that sense, is not only national sovereignty. I think it was mentioned earlier: it's about balancing personal digital rights and personal digital sovereignty against, or together with, national digital sovereignty, and that is important. Recently, through our programme Net Mission, Asian youth have been calling for reclaiming agency over their data. And that is a part of the index, I think, that's important, which is in agreement with Nico about open source and transparency. I want to echo what was said in the opening about your digital self belonging to you. That, I believe, is digital sovereignty, personal digital sovereignty.
Finally, the third thing is about the multilingual internet, which we talked about earlier. I think we need to move away from an English‑first mentality to a multilingual‑by‑design approach. How the index would take that into consideration, in terms of the infrastructure and in terms of resilience, is something that I think is important. So, again, congratulations, and I look forward to participating in this index.
>> MODERATOR: Thank you so much. Thank you to the speakers, and thank you to the IGF Secretariat for allowing us a few extra minutes. We'll continue the conversations and questions offline. Looking forward to those conversations.
>> MODERATOR: Thank you.
>> MODERATOR: Thank you. Vineet, can you hear us?