The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> WOLFGANG KLEINWACHTER: Ladies and gentlemen, welcome to the session on autonomous weapons systems. The IGF has identified autonomous weapons systems as an internet-related public policy issue.
Twenty years ago, when WSIS started, nobody talked about the military domain. But like all technologies in the long history of mankind, the internet and other new technological achievements have been pulled into the military domain.
By the way, the internet started some 60 years ago as a project of the US Department of Defense. So it was not so far away from the military domain. But in the last couple of years we have seen that a new issue is emerging here. We have several negotiations around this. There is a proposal from the Secretary-General of the United Nations to agree on a document in the year 2026.
We have a Group of Governmental Experts negotiating this, and there is a push for discussion in the General Assembly of the United Nations. So it is very natural that we also discuss this in the framework of the Internet Governance Forum. We did it already last year in Riyadh, and this is more or less a continuation. The aim of this workshop is to do more outreach, so that more people are aware of what's going on in this field.
We will not continue the negotiations here at this table. But we want to collect various perspectives, and we have an excellent panel which will give you all of them. We will hear in a minute from Vint Cerf, the father of the internet and Chair of the Leadership Panel of the IGF, who will give his perspective.
And then we will hear from Ambassador Pehringer from the government of Austria about the Austrian initiative. Then we will have two different perspectives, from industry and from the technical community. Benjamin will give the industry perspective. And Anja, who unfortunately has broken her leg and so is only with us online, will give us the perspective of the IEEE and the technical community. Then the comments: from the Global South with Olga; from China with Professor Peixi; from Norway with Gerald; and from Chris Painter, online from Washington. And now, I hope, Vint is online.
>> VINT CERF: It is Vint. And unfortunately I'm not able to open my camera, unless someone on the controls can do that for me.
But here we go. Maybe that just worked.
Maybe not.
I can't enable the camera for some reason.
But if you will forgive me, I'll speak anyway. Because we only have a finite amount of time.
First of all, it is important for all of us to remember that computers have been involved in weapons systems literally from their earliest creation. The ENIAC computer was built in the United States and in operation around 1945, and its first use was to calculate ballistics tables for large-scale guns.
But I also draw attention to today's environment, where hypersonic weapons are becoming available. Satellites are operating at 17,000 miles an hour. There are complex multi-drone attacks that are threatening. There are fire-and-forget and over-the-horizon weapons systems. There is a dire need for situational awareness in complex environments. We are looking at interest in creating digital emblems, which are the analogue of the kinds of red crosses that you see on buildings and vehicles.
And finally, we need to remember that autonomous weapons are not just kinetic weapons; presumably cyber targeting is within the scope of our discussion. All this is to say that the topic is timely and important. It is also to say that the military is faced with a fairly serious problem of responding to high-velocity attacks and large-quantity attacks, and they need computing to help them.
So I expect that part of our discussion will be whether or not the targeting is automatic, or is in fact under some kind of human control.
And this is not a trivial question to answer especially if the attacks are large in scale and scope. But I think that probably is part of our central debate. How do we recognise the utility of computer‑based systems in dealing with conflict if we're forced into that? And how do we do it in a way that does not get out of control?
And so I think the discussion today will almost certainly centre on how do we maintain some human ability to limit the choice of targeting in an automated system so that it only goes after targets that we believe are legitimate?
And I don't think that is an easy question to answer. But it is clearly one that this discussion needs to shed light on.
So I'll stop there. But thank you very much for the opportunity to begin, and apologies that I'm invisible. I will try to rectify that during the course of the call. I can stay on for about half an hour.
>> WOLFGANG KLEINWACHTER: Thank you very much, Vint. I think you already put your finger on the crucial point: human control. What is justified, and what is not justified?
And as we will probably also discuss, there are a lot of arguments about how (?) weapons systems are also part of the (?), a means to promote peace, not war. Interesting. Let's wait and see what the various panelists have to say. Now I move to Ambassador Pehringer, because two years ago Austria started the discussion in the United Nations, and the Secretary-General presented a report last year which collected statements from a lot of governments and NGOs. Ambassador, please give us a short overview of what Austria has done and is planning, and what we can expect in the next years.
>> STEFAN PEHRINGER: Thank you very much, professor. I'm pleased to offer some remarks from an Austrian perspective. At the outset, allow me to thank our moderator, Professor Kleinwachter, and our distinguished speakers here in the room and those, like Vint Cerf, joining us online, for contributing to this timely and important conversation.
Ladies and gentlemen, like all transformative technologies, the application of artificial intelligence in the military domain is advancing rapidly. These developments promise to make tasks faster, easier and more accessible. Yet they demand robust guardrails and limitations, in the civilian as well as in the military sector, to ensure that AI is used in a human rights-based, human-centred, ethical and responsible manner.
While the civilian domain is increasingly governed by regulatory frameworks, the military and defence sectors are lagging behind. Austria therefore supports ongoing international efforts to promote responsible military use of AI. These include the REAIM initiative by the Netherlands and South Korea and the US Political Declaration on Responsible Military Use of AI and Autonomy.
Today we want to focus on one of the most critical and sensitive issues in this broad field, one that Austria is particularly engaged on: autonomous weapons systems (AWS), systems that can select and apply force to targets without further human intervention. AWS raise fundamental legal and ethical concerns.
These include the necessity of meaningful human control to ensure proportionality and distinction; the need for predictability and accountability; the protection of the right to life and other human rights; and the principle of human dignity.
There are also serious risks from a security perspective: the risk of proliferation, including to non-state actors, and a destabilising autonomous arms race. These topics will be explored further by our expert panel.
In light of these concerns, Austria has taken a leading role, as mentioned, in advancing international regulation of AWS. In April 2024 Austria hosted the Vienna conference "Humanity at the Crossroads" to examine the ethical, legal and security implications of AWS and to build momentum for international regulation.
Austria strongly supports the joint call by the UN Secretary-General and the ICRC President to conclude negotiations on a legally binding instrument on AWS by 2026.
Over the past decade, discussions have taken place notably within the Group of Governmental Experts in Geneva and at the Human Rights Council, where a growing majority of states agree on the need for international regulation, including prohibitions and restrictions. However, moving from discussion to negotiations on a regulatory instrument remains difficult.
Geopolitical tension, mistrust, and reticence to regulate these fast-paced technologies are slowing progress, even as the window for preventive regulation is closing rapidly.
In response, Austria and a cross-regional group of states have so far introduced two resolutions on AWS in the UN General Assembly.
The first, in 2023, mandated a UN Secretary-General report. Its 2024 follow-up resolution, supported by 166 member states, established open informal consultations in New York to address the so far underdeveloped legal, technological, security and ethical aspects of AWS that we want to put a particular focus on today.
These consultations complemented the Geneva-based efforts and proved to be very (?) and informative for delegations that have not yet had the chance to participate actively in this debate.
These informal consultations further included not only states but also stakeholders from industry, academia, civil society, and the tech sector.
From Austria's point of view, the global discourse on AWS must extend beyond diplomats and military experts. The implications of AWS affect humanity at large, from a moral, ethical and legal point of view as well as from the perspective of sustainable development. These issues concern all regions and all people.
We therefore advocate a multistakeholder approach: contributions from science, academia, industry, the tech sector, parliamentarians, and civil society are essential to ensure a holistic and inclusive debate.
Today's event builds on earlier efforts at the Internet Governance Forum in Riyadh, as mentioned, last December, and at EuroDIG in Strasbourg this May.
Life and death decisions, the risk of dehumanisation, the value of empathy and compassion, data bias in machine learning, and the issue of accountability, to name just a few relevant aspects. Panelists will examine these ethical perspectives from their various angles.
For Austria, ladies and gentlemen, humanity is at the crossroads. We must come together to confront the challenges posed by AWS. This moment may well be the Oppenheimer moment of our generation.
Experts from across disciplines are warning of the profound risks and irreversible consequences of an unregulated autonomous weapons arms race. There is urgency to finally move from discussions to negotiations on building rules and limits for AWS.
The longer regulatory efforts are delayed, the harder it will be to reverse course once these weapons proliferate globally.
Democratic countries in particular must recognise that this delay runs against their own long-term interests and the broader interests of humanity.
What is needed now is decisive political leadership to shape international rules on AWS. We believe that today's multistakeholder exchange will contribute significantly to this shared goal, and we count on your continued engagement on this issue. I look forward to a rich and constructive discussion. Thank you very much.
>> WOLFGANG KLEINWACHTER: Thank you Mr. Ambassador. And when you argue that ‑‑
(Applause)
Yeah. A good overview of where we are and what the challenges are. And when you say that we are at the crossroads, then this has to be taken out of the hands of a small group: we need a broader understanding, more IGF, and here of course all stakeholders are represented, so we get the full picture and the various perspectives. So I'm very thankful that we also have the industry perspective on this panel. Benjamin, I hope you are online; I now invite you to give your perspective.
We started the discussion in Strasbourg a couple of weeks ago, and here is now part 2. Benjamin, you are welcome.
>> BENJAMIN TALLIS: Good afternoon to Oslo from Berlin. I can hear you well. I hope you can hear me?
>> WOLFGANG KLEINWACHTER: Yes.
>> BENJAMIN TALLIS: Very good. Thank you very much for the invitation, and thanks for the excellent opening remarks from Vint Cerf and the ambassador that we just heard, which I think laid out clearly the terrain in which we're operating.
Very quickly: I represent Helsing, which is Europe's largest defence AI company, the largest new defence firm in Europe. We were founded specifically to use AI to protect democracies. Reacting to the geopolitical change we had seen over the last 10-15 years, the founders established Helsing in 2021 specifically to attain tech leadership for democracies, so that we can actually get the defence that we need, at a time when democracies are increasingly under threat and increasingly unable to deal with that threat through the advanced military means which in the past have indeed led to effective deterrence. We were losing our technological edge and faced a security risk. That is why Helsing was founded.
I think it is important to emphasise what both the ambassador and Vint Cerf said in different ways. We're in an AI arms race. And it is very bad to be in an AI arms race. But it would be far worse to lose an AI arms race to authoritarian states who do not share our values and wish to actively shape the world in their interests, according to their very different values rather than ours. But we have to do this in a way that strengthens our values rather than undermines them. That is why at Helsing we see the controllability of autonomous weapons systems as a competitive advantage, and why we invest so much in that.
Precisely for the reasons mentioned by the ambassador: the effects of autonomous weapons systems have to be foreseeable. The weapons systems have to be reliable. They have to have traceable effects. And ultimately they have to be controllable. This is what we invest in as part of the new generation of explicable AI, which can give an account of why it has done what it has done, and which is much easier to keep within the bounds that we actually set for it. And keeping autonomy within set bounds is nothing new in military terms. This is in fact not in any way a revolution, but the continued evolution of military command and control, which has always been based on the principle of delegation of bounded autonomy: sometimes delegation to a subordinate officer or soldier, sometimes to a weapons system in particular.
And as mentioned by Vint Cerf, computers have been involved in this since their invention. We have been dealing with autonomy in weapons systems for an awfully long time. As soon as you go beyond visual-range combat, be it in artillery or missiles, to a certain extent you delegate the authority for that effect to the system. Of course it is triggered by human control.
But we can also see, with the quasi-autonomous systems of the past, be that smart anti-tank mines such as those of the 1980s, or even pressure mines in the naval domain, that there has been a willingness to delegate to systems to actually conduct these effects.
Now of course that's precisely why the principles referred to before, of discrimination, proportionality and respect for the right to life, were introduced: as ways of understanding how we have to be able to regulate these systems so that they have that foreseeable, reliable effect, and the effect we actually want them to have, rather than, for example, indiscriminately targeting civilians.
I would put it that, with the maturation of the revolution in military affairs, what we're seeing is the ability to conduct sensor and data fusion: to use targeting that is triggered not only by one set of sensors, or even by two sets of sensors, as in the past with some very basic systems, but by a profusion of different sensors. That can greatly enhance the precision of the weapons that we are using, the weapons that we at Helsing in particular try to develop, and it will give our militaries an advantage on the battlefield.
Now that, to me, raises the prospect of a less problematic rather than a more problematic weapons system.
We can also see that this is very important in the context of using advantages in technology to leverage another of democracies' key advantages, which is our willingness to undertake delegated decision making, something our authoritarian rivals consistently struggle with, be that China or be that Russia.
Why do I say this? Because the reconnaissance-strike complex at the heart of the revolution in military affairs, which loosely means linking a huge variety of sensors with a much more diverse and broad proliferation of "shooters", as they are known, or effectors, such as strike drones, only works to its full effect if you allow delegated decision making. For years, the problem was that we didn't have the networks that could actually handle the level of data being shoveled through them. Now, with edge computing and the development of platform- and system-level autonomy, we only have to send back through the networks the data they need, so it no longer crashes them. That increases the capacity for mission command and delegated, distributed control to levels more likely to be relevant on the battlefield.
Now, in response to the point made earlier about the saturation of the battle space, the complexification of the battle space with a huge proliferation of different threats but also a variety of different potential targets: clearly there is a desire among our militaries to engage in semi-autonomous or autonomous targeting within certain battle situations.
However, I don't think that poses the kinds of risks we often talk about in this regard, because that would actually be engaged when there is a clear attack that does not involve a mixture of civilian and military actors, and it is much more likely to be undertaken by militaries when only military actors or military intelligent machines are involved.
We can see that actually panning out in Ukraine, where this is at its highest level of development so far, with the massive distance that is evolving between the front lines.
That distance is a result of the inability to manoeuvre, and within that space you do not get free civilian movement. It is an area that is basically military only, and which is not very available even for military movement. So what is going through those battle spaces is where these semi-autonomous targeting systems, which are still in their infancy and not something we actually particularly engage in, would be tested.
I don't think it raises quite the same questions, and it is important to draw this back to concrete examples, to the concrete realities of battle we'd be dealing with, rather than focusing on abstractions that might actually take us further away from being able to defend democracies and to use this as a form of deterrence, and put us in a negative position in relation to the AI arms race.
Now, a last point to finish up. It was mentioned that this might be an Oppenheimer moment. I agree fully. And the attendees or participants at the conference might be interested to read today's website of The Economist, where my boss makes a call for a Manhattan Project for AI, because this is the scale of the challenge that we actually face. If we lose this race, we're in deep, deep trouble.
So we have to win it, but at the same time do it in the ways I've outlined, ways that actually strengthen rather than undermine our values. I'll leave it there.
>> WOLFGANG KLEINWACHTER: Thank you. And I hope Anja Kaspersen is now online and will give us another perspective, from the technical community and civil society. Anja, are you there?
>> ANJA KASPERSEN: I am indeed. Can you hear me okay?
>> WOLFGANG KLEINWACHTER: Yes. We can hear you.
>> ANJA KASPERSEN: You do ‑‑
>> WOLFGANG KLEINWACHTER: My best wishes to you.
>> ANJA KASPERSEN: Thank you. Yeah. I'm so sorry for not being able to join in person as originally planned. I had an accident and surgery. Apologies, everyone.
And, you know, Benjamin and I have been on a panel like this before, in Strasbourg, so they like to keep teaming us up together. I will try to refrain from commenting.
Thanks for the chance to contribute to this discussion.
I speak today in my capacity as a representative of IEEE; where comments are personal reflections, I will make that clear. And just for those not familiar with us, which I assume is some in the room: we are the world's largest independent technical professional organisation, spanning over 190 countries and bringing together nearly half a million engineers and scientists across all domains and disciplines in the technological space.
So my remarks do not represent any political position, but rather reflect a long-standing engagement from our side that dates back to the early days of the internet and autonomous robotics. Vint Cerf in particular knows this organisation really well; he is the recipient of some of our most highly regarded awards over the course of many years. Our engagement has been very strong for many years in the foundations of technical governance, in terms of what is happening in the multilateral space around the institutional design of these technical autonomous systems.
Some of you may have seen the talk in Strasbourg, so I will try to bring something new. I've been involved in conversations around military applications and uses of AI for quite some time, ranging over a couple of decades now, including overseeing some of these processes in Geneva under the Convention on Certain Conventional Weapons, from the early days into the more mature stages of where we are now.
What I want to offer here is not a summary of technical challenges per se, many of which are now widely acknowledged, but a framing, based on what I will say is decades of international and cross-sector work, of what is structurally at stake.
These are not political reflections; they are institutional and infrastructural. And the claims I caution against are not just hypothetical anymore. They are increasingly being made, and I believe they demand rigorous scrutiny.
First, we must stop treating AI as a bounded technological tool. AI is not a weapons system in the traditional sense, as Vint Cerf also pointed out. It is a sociological and methodological approach, a system of methods by which we organise how war is imagined, operationalised and (?). It shifts the burden away from judgement and accountability and towards (?) coordination and automation. In doing so it reconfigures (?).
There is a growing assertion that AI can not only support but embody commander's intent. Commander's intent is not a checklist or an input. It is a deeply human concept: an articulation of purpose, risk tolerance, values and trust, designed to guide (?) uncertainty. In human-to-human operations it is already complex. In human-machine interaction it becomes nearly impossible.
Many brilliant scholars and strategists have written extensively about this. I'm happy to share some of their work if it is of interest to those listening.
A system can simulate coherence without understanding what it is being asked (?). And context, reasoning and values are highly fluid in battlefield settings as well.
Humans are trained to override, to interpret, to exercise judgement. These are tactical and moral faculties that current machine learning systems find hard to replicate.
(?) a leading expert... these systems conceal a critical reasoning gap, projecting fluency without understanding. Recent studies from Salesforce, Apple, IBM... show that even so-called large reasoning models collapse under pressure. Studies out in recent weeks show they generate confident (?) reasoning on multi-step logic. One scholar refers to this as system collapse, precisely when we (?) reliable, interpretable and contestable.
And yet, as Meredith Whittaker has recently cautioned, there is increasing pressure to build (?) AI systems that operate with enough access and autonomy to cross the "blood-brain barrier", this is her phrase, between localized systems and operating architectures. In privacy terms this is already deeply concerning. In military contexts it poses risks not just to security but to the institutional legitimacy of using these systems.
Once a system crosses that threshold, it begins to alter how decisions are made and who or what is in command. Benjamin also referred to newspaper articles; there was a recent article in the Financial Times highlighting how claims about AI's (?) are driven as much by venture capital as by verified capability. There are many views on this, and it is important to recognise that we all come from different viewpoints. But I think one shared concern, which we also discussed in Strasbourg, is the concern over the narrative power of such claims.
And again, as Vint Cerf noted, military innovation and technology have always been entwined, but AI marks a bit of a step change in my view.
It doesn't just extend capacity; it begins to reframe the very nature of operational judgement. I'm convinced this is different. Which brings me to the issue of procurement. Most institutions, including military ones, do not build systems. They procure them.
Increasingly, systems are (?) pretrained and instructed. They come wrapped in marketing language (?). These terms suggest coherence and controllability, but often obscure the reality that such systems (?) generalise poorly and fail silently.
Failures will not be system crashes, but subtle misalignments between logic and lived operational context. This is why IEEE developed P3119, a cross-sector procurement standard applicable to any high-risk AI system, including those in defence, designed to help (?) before a situation arises.
It is not only for engineers but also for policymakers, legal experts (?), those engaged in this space and trying to hold companies and government leaders accountable. Because governance begins at the level of specification. Equally critical is the question of infrastructure. Many military systems rely on legacy architectures not designed for high-intensity compute loads. This introduces vulnerabilities, interoperability challenges and strategic blind spots.
Meanwhile, large-scale AI systems remain highly energy intensive. This is not just a question of environmental impact; it is a matter of operational security and resilience. Any AI governance framework that overlooks the role of energy, what types of energy and materials are being used, infrastructure, or global supply chain fragility is not merely incomplete. It is strategically naive. And I'll come to my final comment now.
In this context it is unhelpful and irresponsible to frame (?) as a race. It suppresses caution and elevates vendor narratives over institutional responsibility. Responsible governance is not a brake on (?). What matters is responsible institutional decision making before design, during integration and long after deployment. Without this, accountability becomes hard to trace and governance (?).
I will close with a reflection from my late mentor, and something Benjamin and I have in common: we had the privilege of studying with the same mentor. He warned against the automation (?) of war. Automation may abstract violence, (?) responsibility, and obscure cause and effect. But it cannot under any circumstances make war more humane, nor should it. That remains an ethical question, and not one any machine should answer.
Thanks so much again, Wolfgang, for this opportunity to share some observations. And I know there are other people on the panel who are going to disagree with me, so we are looking at an interesting debate to come.
>> WOLFGANG KLEINWACHTER: Thank you, Anja. And you have seen it is very complex. It is complicated, but it is good to have different perspectives on the table; that is why we're here, to get the full picture. I now invite the commentators here at the table, and I would start with Olga, from the Global South, so that we have the broad spectrum. What are your comments? How do you see it from Argentina? Olga is (?) at the defence ministry in Argentina, in Buenos Aires.
>> OLGA CAVALLI: Thank you very much, Wolfgang, for inviting me again to this interesting conversation, which we had last year in Riyadh and which I've been following.
I would like to bring two perspectives, the first from the academic side, which is where I am now. The good news is that we have started new training on cyber defence. For the Spanish-speaking audience and participants: we have a new degree programme in cyber defence which is free, virtual and in Spanish, so anyone interested is able to apply for a place on it. We have had a very good response from the Latin American community so far, and we're still developing the curricula for next year. We started this new programme this April.
And of course we have an academic focus on autonomous weapons. I would like to bring this perspective, and then some comments about the negotiations Argentina is engaged in, which I am personally not involved in but am informed about.
These focus areas are included in the curricula of the programme. First, ethics and autonomous weapons, as has already been mentioned: the ethical implications of autonomous weapon systems, particularly the delegation of life and death decisions to a machine.
We analyse the consequences of reduced human oversight in military contexts. This has to be evaluated from an academic perspective.
Second, human control and responsibility, including the necessity of meaningful human control over autonomous weapon systems. It should be considered that technical programming alone may not be enough to address ethical concerns; as has been mentioned, these systems depend on energy, on legacy machines, and on equipment.
The important point is that human judgement is irreplaceable. That always has to be present in any training, and in any decision involving lethal force.
Third, bias and discrimination in algorithmic decision making. The inclusion of artificial intelligence, as one of the speakers mentioned, is a reality. It is challenging, but we have to face it, and the best we can do is understand it. Algorithmic bias in these systems can disproportionately harm marginalized groups and complicate the distinction between civilians and combatants. Then there is the issue of fairness and accountability in artificial intelligence.
Fourth, human rights and humanitarian law: the intersection of autonomous weapon systems with human rights and international humanitarian law. These weapons may exacerbate existing vulnerabilities, especially in societies with high inequality. Latin America is a fantastic, beautiful region with a lot of diversity in nature and people, but there are also high inequalities, sometimes fragile institutions, and systemic violence in some places. So that has to be considered.
And fifth, policy and legal frameworks: the academic focus must include the need for robust legal and policy responses, and the need for legally binding international instruments to prohibit autonomous weapons systems operating without meaningful human control. We are including all these issues in all the programmes.
So, summarising: the moral implications of autonomous weapons; the necessity of human oversight in lethal decisions; the risk of bias and discrimination in artificial intelligence-driven systems; the impact on rights and international legal obligations; and the need for binding international instruments and regulations.
And about Argentina: as I said, I'm not involved in those negotiations, but Argentina has been actively engaged in international discussions and negotiations regarding autonomous weapons systems. It is a vocal proponent of robust international regulation and oversight of autonomous weapon systems, calling for legally binding (?) and the protection of fundamental rights and security, and, often in collaboration with other Latin American and international partners, it has submitted draft protocols calling for prohibitions and regulations on autonomous weapons systems. I will stop here, and maybe we can have an interaction afterwards.
>> WOLFGANG KLEINWACHTER: Thank you. And I hope you can also prepare some questions; I hope some time will be left. But we still have two commentators at the table.
Professor Peixi is from the Communication University of China in Beijing, and he can give us China's perspective.
>> PEIXI XU: Firstly I would like to comment on what the ambassador said about the resolutions. I think you are referring to resolution 78/241 on lethal autonomous weapons systems. I would compare the adoption of that resolution to a moment that happened in the debate about cyber norms in 2018, when there were two parallel processes in the cyber norms debate: one is the UN GGE, among governmental experts; the other is the OEWG, the Open-Ended Working Group, which allows a lot of other actors to be involved in the debate over cyber norms.
So I would say that the adoption of such a resolution is a kind of 2018 moment: it means more actors can be engaged in the debate on AWS.
So it is a step forward from that kind of perspective. A lot of countries, 152 countries, voted in favour of the resolution, and 4 voted against, among them Russia and India. Some abstained.
The dispute here, as I have observed, is that, firstly, some countries would like to keep, for example, the CCW platform as the single platform to talk about AWS, or LAWS to be exact. They argue for a kind of high-quality result, a consensus report. That episode also happened in the cyber norms debate, where I think the United States stuck to the position that there should be fewer members in the debate, so that there could be a high-quality report instead of other distractions. So that moment is repeated here.
There is also a kind of dispute, in the Chinese perspective, between acceptable LAWS and unacceptable LAWS. I would say that there is a kind of resonance among countries here. For example Norway, where we are now, and also Finland, Sweden and France have argued in GGE sessions that there should be a division: if a weapon cannot comply with international humanitarian law, it should be prohibited.
And if weapons can comply with IHL, then they are permissible. So there is a kind of resonance in that aspect towards solving the issue.
There is also the dispute about definition: whether the definition of LAWS or AWS is clear or not. I think the disputes between state actors concentrate on these different aspects in terms of the adoption of such a resolution.
However, in the long run I would say it is politically correct to engage with more actors, particularly the civil society actors. So that is my response to what you said over there.
Then, I slightly disagree with what Benjamin from the industry was saying. In particular, I don't agree with putting contexts essentially different from each other into one block. That is to say: China is China, not Russia. China is not the United States, and the United States is not China.
So it is not reasonable to lump these countries together. Historically speaking, by the way, Russia is similar to Europe in the Chinese history books. So I slightly disagree with this kind of perspective of putting countries together.
And coming back, by the way, to this talk about LAWS and the Chinese perspective: when the Geneva NGOs were visiting China, we had quite some conversations. The Chinese official perspective, as I understand it, is that if, for example, the UK, the United States and Russia accept a kind of prohibition of LAWS, a total ban of everything, if the powerful countries accept the civil society groups' ideas, like the stop-killer-robots kind of movement, then China is very much ready to be on board and to accept such binding terms to prohibit these weapons.
So that is a kind of response to Benjamin and to the categorisation, the illustration, of good guys and bad guys. That is what I can offer as a response to the speakers.
>> WOLFGANG KLEINWACHTER: Thank you very much, Peixi. And I welcome now Chris Painter, who is now on board; I hope you have finished your other meeting. But before I invite you, let's hear what Gerald Folkvord from Norway has to say. He's involved in the civil society movement against killer robots.
>> GERALD FOLKVORD: Thank you very much. For Amnesty International, as a human rights organisation, digital development in the world is a central issue. We work in many different areas. And I have to state right from the beginning that Amnesty International has been using artificial intelligence in its own work for a long time. We use other digital tools too, so we are far from trying to invent a world where digital development doesn't exist.
And that is very important. We also see the enormous impact that digital developments have on people's human rights in many areas, and we try to deal with that. And we are very concerned about the areas where people give away control. That is also why Amnesty is so involved in this issue of autonomous weapons systems. And right at the beginning I want to say thank you; it is actually my colleagues from the international secretariat, our actual experts, who have asked me not to forget to thank Austria, both for arranging this meeting and, not least, for taking the initiative to take this discussion out of the exclusive club of the CCW.
Because this is an issue that affects everybody. A decade ago we worked with (?) and saw the effect when those affected came to the table and had their say, and then things started to change. It is very important that those people who will most certainly be killed, discriminated against and oppressed by the use of killer robots have a say in this conversation.
I agree with some things Benjamin said, and I did not agree with other things he said. But this is very important: he is right that we should not look at this as an abstract issue. I know he didn't mean it that way; I am interpreting it. But for me, abstraction means looking at it superficially and forgetting the people who are actually affected.
And our job as civil society is to talk about the people who will actually be affected by what is happening.
And I also ‑‑ well some of the things I meant to say Olga has already said brilliantly. So I do not have to repeat them.
But from the human rights perspective it is very clear: human rights are based on human dignity, and the very idea of machines making autonomous life and death decisions about humans is a contradiction of human dignity.
So this is something inherently dehumanising, and we cannot accept it. The whole concept of human rights collapses when we outsource these decisions to automatic systems. Not only because it is undignified to let a computer decide who is allowed to live and who is allowed to die; it also undermines the whole system of protection of human rights and international humanitarian law, because legal agency disappears. Who do you hold responsible for a killer robot killing somebody in contravention of international law?
One of the things Benjamin said that I disagree with was when he said that we have already, for a long time, had systems that delegate authority and responsibility to weapon systems. I do not agree, because as of today the responsibility always lies with the humans using the systems. And that always has to be in place: a clear international legal system that secures accountability for those who contribute to violating international law.
Once that disappears, when we say let's leave it to the machines, machines are smarter than human beings, they will make fewer mistakes than human beings, then, not least, warfare by machines makes violations more invisible. It becomes even more difficult for the victims to actually bring those who violated their rights to justice.
And I have a suspicion that this is also one of the very attractive things about autonomous weapon systems: warfare becomes invisible. The human rights violations, the atrocities, the war crimes are no longer visible. Our guys no longer die on the battlefield; everything is done by machines, so therefore everything is allowed. And that is where we are going if we do not use this moment to regulate this very clearly, to state very clearly what is allowed and what is not allowed, and not least to give the industry very clear guidelines on what we want them to develop and which developments they should stay away from.
Thank you.
>> WOLFGANG KLEINWACHTER: Thank you, Gerald. I'm afraid we cannot settle all these problems here, and I'm also afraid that we will have no time left for interactive discussion; we have just 5 minutes to go. Chris has listened to the last two or three statements, and he was involved in the debates in Riyadh and also in Strasbourg, so he knows more or less the constellation. So you have now the final words, and you can reflect on what you have heard. And then we have to conclude, unfortunately.
But I'm sure that this discussion will continue at the next IGF or in the WSIS+20 review. So many things are on the table and have to be discussed. Chris, thank you.
>> CHRIS PAINTER: Thank you, and great to join you all. I wish I were there. It is about a billion degrees in DC right now; it would be much nicer to be in Norway, not just for the company but for the temperature.
My expertise is in cyber, and I want to bring, as I have in previous discussions, just a little reflection on what's happening with (?) cyber at the UN and some discussion of this.
I think one of the issues is stakeholder involvement, and I completely agree on both the seriousness of the issue and the importance of getting stakeholders involved. I would say that because of geopolitical and other issues, getting key stakeholders involved, from industry and civil society, including the human rights community, has become much tougher.
In what's called the Open-Ended Working Group at the UN, which is coming to its conclusion in a couple of weeks, this has been a major area of debate. A lot of stakeholders are summarily blocked by just one country, usually Russia, which is problematic, because their expertise in this area is critically important.
So the UN is in a difficult position for having that stakeholder discussion. Hopefully it will get better, but right now I think that is one issue.
The other issue is that, as important as this issue is, actually working towards some kind of binding treaty is difficult, again because of geopolitical issues; we haven't been able to do it in cyber, where there has been a divide for years now. We have, as one speaker mentioned, agreed on norms. That was at a particular time when countries came together and saw the common interest in agreeing on them, including China, Russia and the US. And then there was agreement more generally.
And that is important; that consensus is important. But I think very little is going to happen by consensus in bodies like the UN going forward, at least for the time being. That is something to look at here, and something we are seeing in cyberspace too.
Even on the agreement with respect to norms, there is now a dispute about what those norms mean. And the biggest part, which applies to this debate too, is how you have accountability once there is agreement among countries on any (?), whether a binding treaty or just norms. It doesn't matter what countries agree to if they don't abide by it and there is no way to have some sort of responsibility, some sort of accountability. And I think that is true here.
So ultimately, I think this is an important area for the UN to look at, and outside it as well. But I would also note, as a caution, that something we've been talking about for a very long time in terms of cyber is somewhat stalled. There has been some progress, but it is somewhat stalled, and I think, just as a reality, we're not going to make that kind of progress quickly. But there are things we can do, in a resolution and elsewhere, to start the conversation. It is going to take a while; it is not going to happen overnight, despite what any of us might want.
I'll end on that perhaps negative note, but on the positive side: it is important that we're talking about this, continuing to talk about this, and giving attention to this issue.
I would agree that autonomous weapons issues have been around a very long time, but they have greater urgency now.
So with that, I'll close my remarks, Wolfgang, and let you sum up. Again, good to hear the discussion; it is a very important one.
>> WOLFGANG KLEINWACHTER: Thank you very much. And I have to apologise to the audience and the other panelists that no time is left to react. But this is an invitation to continue the discussion online and offline; I think we will have opportunities in the future. My understanding is that the IGF will be extended, so I am looking forward to another session in this format at the 21st IGF in the year 2026, even if we have no clue where it will take place.
So thank you very much. And I hope you could get some food for thought from this arrangement here, with the various perspectives from the Global South and the Global North, from different stakeholders, industry and civil society.
Thank you very much. We have three seconds to go.
And now it is over.
Thank you.