The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> Ladies and gentlemen, we are approaching 9:30. It's actually 9:30 already. I aim to start this panel in a minute or so, so please take your seats.
>> MODERATOR: All right. Welcome to this workshop on "Public diplomacy v. disinformation: Are there red lines?" My name is Sebastian Bay. I'm a senior expert at a multinational NATO-accredited research body. We don't speak on behalf of NATO, but we are NATO accredited. I'm here today as a moderator: we have myself, the offline moderator, and I'm also joined by an online moderator who will help me during the day. The online moderator is Caroline Groene from Microsoft.
With us today on this panel we have Felix Kartte, a policy officer at the EU External Action Service, working on election meddling online and offline, I assume. John Frank, the vice president for EU Government Affairs at Microsoft. We have Marilia Maciel, who is a digital policy senior researcher at DiploFoundation, a nonprofit working to increase the role of small and developing states in global governance and international policy development.
And then we have Goetz Frommholz, who works as a policy analyst for the Open Society European Policy Institute. He leads the advocacy work for the Open Society Foundation in Germany.
Now today, we will discuss: are there red lines between public diplomacy and interference?
I come from the Swedish Civil Contingencies Agency, where I headed the election project in 2017 and 2018, and where we set up the protection efforts after the interference in the 2016 U.S. elections.
So the whole world came together after 2016 and realized what was happening and saw the need for an approach to handle this. One of the core things to deal with was of course definitions of what is going on. Now this development hasn't gone away.
Yes, we haven't had high-profile cases like the 2016 presidential election, but we've seen an increase, a threefold increase, in cyber-related incidents during election processes.
So recognizing this threat and the increasing trend here, a lot of people, governments, Civil Society, industry, and different groups have tried to take action to underscore the need for dialogue and to make progress in different forums.
We have the Paris Call for Trust and Security in Cyberspace, and in 2018 there was the G7 Commitment on Defending Democracy from Foreign Threats. We've seen a lot of national work on this, and a lot of work at the European Union, for example.
Now the purpose of this panel is to talk a little bit around definitions and try to see if we can move forward in trying to understand what is disinformation, what is interference, how do we relate that to the normal processes of public policy.
Now foreign interventions in democratic elections happen for several different reasons, and some of them can be seen as part of the normal toolbox of democratic states. We see that states interfere in other states' business. That interference ranges from normal diplomacy to negotiations, provision of aid and, in the most extreme cases, interventions, movement of troops, sanctions and, ultimately, war.
For both malign actors and legitimate actors seeking to promote democracy or a particular political agenda, many of these tactics can look similar, from disseminating rumors to damaging rival candidates' credibility. Especially when you look at national and local processes, it can be difficult to distinguish between these things.
When we look at the international level, it can be even more difficult to see what we should regard as malign or hostile interference and what is legitimate influence in these issues. We brought together a group of experts on a panel organized by Microsoft.
And we hope to facilitate a discussion among the actors on this panel on foreign interference, versus the use of disinformation, versus the use of public diplomacy. Can we draw a red line? Can we distinguish what is legitimate and illegitimate influence? To start out this discussion, I hope to lay the groundwork with introductory remarks from the panelists. I will moderate the discussion. I have a couple of questions of my own, which I will pose to the panelists in order to try to further our understanding of this topic.
Then I hope to open it up to the floor and to online audiences. Is there anything that the online moderator wants to say about participation via the online stream?
>> You can find the tool to ask questions online on the IGF website. We are looking forward to your questions.
>> MODERATOR: All right. Let's not wait any further, but let's kick this off with some introductory remarks. I've set the order as Felix, John Frank and Marilia. Let's start with you, Felix.
>> FELIX KARTTE: Thanks a lot, Sebastian. First of all, on why this discussion is very important: there are very obvious red lines already, especially where national election laws apply. And I don't think we have a very systematic overview of those, especially at EU level.
But of course, disinformation campaigns deliberately operate in gray zones where there's no applicable law to hold them accountable.
To cover these gray zones a bit better in the future, I think we should start out by looking a little bit at how we think disinformation and online manipulation impact different Human Rights. I don't think we have had this discussion to a sufficient extent yet.
Anyway. So at the EEAS, the EU External Action Service, we have set up the first EU-level attempt to counter malign interference at an operational level. The incoming Commission has put democratic resilience high on the agenda, which opens a window of opportunity for us to think about beefing up and extending our current approach.
I think that, in particular, there are three things that we want to think about doing. First of all, before we actually set out a definition, I think we still need to be clear on why we think we're mandated to address disinformation and manipulative interference. Why is this of concern to us as governments?
Secondly, and as you say, that's the purpose of the panel: we should define more clearly what kind of state behavior or state-backed behavior we think is unacceptable. And thirdly, and I think that may even be the most important part, we have to enable democratic actors, and by that I mean Civil Society, researchers, but also law enforcement authorities where law applies, to identify and monitor threats to democracy.
So on the first point, to find or identify the normative background against which we are operating: I think we should make very clear in what ways we think disinformation campaigns constitute threats to democracy. We have talked a lot about privacy and Freedom of Expression in the past, but I think there may be other Human Rights at stake that we have not assessed so far. That may include freedom of thought, the right to vote, or the freedom to form and hold opinions free from interference. Secondly, when we talk about norms and rules, I think we also need further discussion on the conditions under which we think disinformation campaigns can actually constitute an act of interference.

On the second point, now that we are actually talking about how to define such unacceptable behavior or these red lines: as you probably know, my colleagues from the EUvsDisinfo project at the EEAS have mostly focused on exposing outright lies spread by Russian state media, and that also reflects the European Commission's view of disinformation.
And we think we should really continue and strengthen that work. However, we see of course, that the playbook of threat actors is much broader than just spreading false content. It's, as you all know, very often about, for instance, manipulating divisive domestic debates without necessarily spreading content that would be debunked by fact-checkers or anything like that.
Another reason besides these operational constraints is that sometimes it may seem illegitimate in the eyes of many for public institutions to act on the basis of a merely content-based definition.
We've all been called a Ministry of Truth many times. Yeah. So on the way forward, I think many EU Member States have already developed quite useful operational definitions: "information manipulation" in the French case, or "malign interference", I think, as the Swedes called it.
And I think it will be worth exploring to what extent we can streamline these different national definitions into a common EU approach. To give one example, and Sebastian, I think you have been very involved in the process of drafting your government's approach to election interference.
The Swedish Civil Contingencies Agency uses the four variables of deception, intention, disruption, and interference.
However, once we have a more straightforward problem definition, we should clearly all together encourage online platforms to also work with us on that basis. I was very delighted and surprised to read yesterday that all the big platforms have signed up to the new Contract for the Web, which includes, I think under principle 6, a commitment to report on how they assess and address the risks created by disinformation on their platforms.
That would be a huge leap forward, because that's not really the practice we have been witnessing so far on their part. As most of you will know, it's currently completely impossible for any independent outsider to verify whether elections have been compromised on these platforms or not. We have no idea about the actual scale of covert interference campaigns on Facebook or Google.
We get from Facebook, for instance, their sporadic newsroom blogs where they tell us they took down another seven networks from Russia, but we have no idea how they attributed that kind of behavior. We have no idea what the scope and impact of that kind of behavior may have been.
So more transparency is clearly in order, and I think and hope that the Commission will also address this through regulatory measures. That's pretty much it.
>> MODERATOR: Thank you, Felix. That's interesting. We want to go from the whack-a-mole approach to a more strategic approach where we can prevent. When you talk about behaviors: we've used behaviors as a way to stay away from having to define exactly what the content is about.
In Sweden, in 2017, as we prepared, we often talked about specific behaviors to help the election management bodies understand what is interference that we cannot accept and what is not. One of the things that was a clear red line for us was to separate out interference with the election management systems, i.e., trying to hack or undermine the infrastructure around the elections. Then voter disenfranchisement, that is, targeting the will and ability of voters to participate in elections. And there's a third pillar: political interference.
That third pillar is more difficult to define, and it is something that we to a large extent stayed away from. Instead, as a state, we focused primarily on countering interference with the election infrastructure and voter disenfranchisement. It's interesting to talk about behaviors, then. I'm going to leave it to John, representing Microsoft, to give his introductory remarks.
>> JOHN FRANK: I would like to talk about how much we know today and how much basis there is in international law and in political discussions about where the lines may be.
On the ride over today I went by the Axel Springer office, which is a tall building. And before the fall of the Wall, it was essentially used as a giant billboard to project western news into East Berlin. The participation of countries in other countries' affairs through what we call public diplomacy is long established and broadly considered to play an important role in Civil Society.
So if we think about where the lines are, we know there are some positive things. The BBC broadcasts around the world. It is a professional journalistic organization. It has a perspective, but it is not trying to spread disinformation around the world.
RT is an interesting example, because sometimes you could say it is offering that kind of perspective, and it is a major news source online. And yet sometimes the news is seeded with what might be considered propaganda. So then you run into the issue that Ofcom in the UK has investigated the broadcaster of RT in the UK over whether it is abiding by the rules of impartiality for broadcasters. But the fact that RT continues to operate would indicate that it is still on the side of public diplomacy. In the political process, there are national groups, including the National Democratic Institute and the International Republican Institute from the United States, which actively go out and work with political parties on building democratic infrastructure.
I think those are broadly accepted to be on the, if you will, clear side. And then we run into information operations on the other side, which raise concerns.
I had hoped that international law would provide some clarity as I tried to sort out where this line might be. I read several learned papers on international law, and I came away with the conclusion that international law is not there yet.
We need to begin to have customary declarations at the political level before we can hope to get to international rules that are actually helpful in this space. I digress for just a second. There's this question of sovereignty, is it a principle or is it a rule?
And that's kind of where the debate is today. France and the Netherlands, and I believe Estonia, have come out strongly for the position that it is a rule, which in the international system must be abided by.
The UK formally, and the U.S. informally, take the position that it's a principle, from which you can't really derive specific rules.
So international law today is not super helpful. When President Obama, himself a law professor, addressed the Russian interference in the 2016 U.S. election, he did not characterize it as a violation of international law, but as "efforts to undermine established international norms of behavior and interfere with democratic governance."
So it's a political statement that he made about established international norms of behavior. Now, on consensus on this point, I think the leading source one can point to is a G7 statement, which of course just represents the Group of Seven countries.
But there is also the Paris Call for Trust and Security in Cyberspace, which is a multi-stakeholder document that was launched a year ago at the inaugural Paris Peace Forum. Today, the Paris Call has 77 nation-state signatories, and signatories in total number 1,034 from 85 countries around the world. There are nine principles in it.
And actually, they're challenges which signatories promise to work on. One is to strengthen capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities.
That's, I'd say the most broadly accepted political statement we have that guides us.
Then we try to think about, well, how do you look at different activities and decide whether or not they are that malign interference by foreign actors? In domestic political activities, we know we're going to have many of the same elements of persuasion, and sometimes deception. But those are outside the scope.
There's a sense that within a domestic political process, the rough and tumble of politics should allow far more, but when foreign actors participate, it's different.
I want to propose five criteria we can look at. The first is transparency, probably the most important. When a country is transparent about what it is doing, it is much harder for somebody to object that it is malign interference. And so after he left office, President Obama recorded and posted a YouTube video where he endorsed Emmanuel Macron for president of France. It was clear that it was him. He's clearly not French, but he's offering a perspective from his experience to the French voters, among whom he's highly regarded.
So, fully transparent. The activities of the National Democratic Institute and the IRI are, again, fully transparent.
Then the next question that comes up is the extent of deception involved in an information operation. Truth is a very tough thing to define, but deception is an easier measurement of malign activity.
And so for the 2016 election, the Senate report talks about efforts to seed divisiveness by feeding in deceptive information.
In the classic example, a Black Lives Matter group was told that there was a protest being staged by white supremacists, and a white nationalist group was told there was a protest staged by Black Lives Matter, and the intent was to get them to show up at the same place and lead to civil unrest.
So the extent of deception is important. But there's also a purpose test, as the third criterion I would suggest.
Here we get into questions about what you're trying to do. Now, if a government hacks another government's electoral system to change the voter registration rolls, or the process of voting and counting and publishing the results, I think most people would agree that's clearly on the other side. That's not public diplomacy. That's probably foreign intervention, and probably a clear violation of international law.
But what about doxing? We saw the Democratic National Committee e-mail servers hacked and documents released. So was the purpose of that to expose criminal behavior, or was the purpose to sow dissent? Last May, the Austrian government fell when a videotape came to the public.
And it was a tape of the far-right party leader offering to trade public contracts for cash political donations. Was that foreign interference, or was the purpose there to expose criminal activity?
And so you can have discussions around purpose. I'm not sure there's a clear answer, but I just offer those two cases as examples. The fourth criterion is scale, especially in today's world, where social media and the Internet allow inauthentic actors the capacity to amplify through bots and inauthentic accounts and, with cross-platform strategies, to very quickly execute campaigns of viral deception.
And so scale in this process does matter. And finally, I think the fifth point you'd look at is: what are the effects? If a campaign doesn't really have an effect, it's harder to complain about. But on the other hand, if you're a lawyer trying to advise whether or not an information operation is appropriate, knowing the effect is pretty hard to do before you start.
So those are just some thoughts about how we need to tease out and have discussions about which side of the line different activities fall on. We also need to think about how we address this not just through law, but through increasing resilience, by ensuring we have appropriate domestic laws that regulate foreign engagement in campaigns.
And also by creating transparency on social media platforms, precisely so we can understand what's going on.
>> MODERATOR: Thank you for that, John. Proposing a test of five questions: transparency, deception, purpose, scale and effect. I think we should take that with us, because at least scale and effect are two new ones that I haven't heard before.
So let's continue to Marilia, and your opening statement.
>> MARILIA MACIEL: Thank you, Sebastian. It's a pleasure to be here. Thanks, Microsoft, for the invitation. I think there are some principles that are recurrent in the debate that we're having here.
Such as non-intervention, sovereignty, self-determination. These are all principles that have been enshrined in international law. My first observation is that although international law will not give us very precise answers to the problems that we're facing here, I do not think we are in a position to put aside international law completely.
We are in a moment in which perhaps we need to reinterpret international law for the context in which we are living. We need to see how we would apply international law; however, we do have some important principles in place that we need to take into account. I would add to these international law principles the concept of stability. This is a concept that has been very dear to the Global Commission on the Stability of Cyberspace. This is a commission that has a multi-stakeholder composition. It worked for three years reflecting on the concept of stability of cyberspace.
And they have just released their final report. This report contains several norms on what is acceptable state and non-governmental behavior in cyberspace, and how we can preserve cyber stability. And among these norms, one of them deals specifically with the period of elections.
Political warfare affects all these principles we have been discussing here. However, political warfare uses an arsenal of different tools, which are not only information operations but also cyber attacks. So elections can be interfered with by disrupting the infrastructure layer that enables elections to take place in the first place, such as tampering with electronic voting booths or the software that allows elections to be carried out.
And that, of course, targets the confidentiality, integrity, and availability of data. But it has an emotional aspect too. So I think it relates to the debate we're having here.
Our discussions are more focused on the cognitive, informational layer. But I think that there's a growing relationship between cyber operations carried out against infrastructure and information operations, as we saw in the 2016 elections. Russia used both disinformation and attacks against infrastructure, as has been reported by the Department of Homeland Security in the U.S., to destabilize the elections. So the Global Commission recognized that the discussion of the problem of information interference is very important. However, if we don't have infrastructure in place, elections cannot take place at all. So this perhaps is a problem that we need to consider in advance.
So the norm that they have proposed says that state and non-state actors must not pursue, support, or allow cyber operations intended to disrupt the technical infrastructure essential to elections, referenda or plebiscites.
There are, as we will see in our discussions, and as we see even more acutely in different parts of the world, different legal and political views on the thresholds that would constitute illegitimate or unlawful interference. But we believe that a norm that protects the electoral infrastructure is a good place to start the discussion, because it is essential to rally the support of the different parties that are engaged in this conversation.
And rallying support has become fundamental to the non-binding norms that are being proposed in the discussions on cybersecurity, because galvanizing support gives us a path to understanding what is acceptable and unacceptable behavior in cyberspace.
So I invite you to read the report and attend the session that will take place this afternoon, in which the commission presents the different norms, including this norm on elections.
But apart from that, another observation that I would like to make is that the discussion about disinformation in elections has been marked by terms that carry a very acute degree of subjectivity. And I think that this is not contributing to the debate.
We ask ourselves what is legitimate or illegitimate interference, but these concepts are very hard to deal with. There's a huge debate in political and social sciences about legitimacy, and this does not really take us anywhere.
Historically speaking, western democracies have been interfering in other western democracies for a very long time. There's a study that documents more than 117 instances of interference coming from both the U.S. and Russia between 1946 and 2000. Coming from the Latin American region, which has a history of suffering interference, in Brazil, Nicaragua and so on, I feel the distinction between foreign policy and interference has been largely blurred, and by framing the debate in terms of legitimate and illegitimate interference, or good intentions and malicious intentions, we are perhaps using words that are not contributing very much to the debate that we're having here. I think a better place to start is what John was telling us, and the criteria that he used here, which have been proposed by different scholars.
Duncan Hollis has worked on criteria. Michael Schmitt as well. I think there are criteria out there that are helping us steer the way in these discussions. There are also documents that have been published by organizations. The Council of Europe has a 2017 report on information disorder.
It proposes two very important questions. The first is: is the information based on reality, or is the information completely false? And the second: what is the intent behind the information? By answering these two questions, we can perhaps categorize information disorder into three different clusters. We can talk about disinformation, which is information that is false and is created with the intention to do harm. We can talk about misinformation, information that is false but not created specifically with the intention to cause harm. And we can talk about malinformation, information that is based on reality but is used in a particular context to inflict harm. And perhaps the way that we address these different types of information disorder should be different as well.
The report also does something that I find very interesting. It tries to decouple information disorder into three phases: the creation of the information; the production of the information, transforming that idea into media that is consumed online; and the dissemination and further replication of the information. And the motives of the actors behind each one of these phases are very different.
Some motives are political, but the motives of the trolls that disseminate the information online are mostly economic, for example. So in order to tackle the problem, it's important to decouple that. I think we need to take into account the lessons of Lawrence Lessig, which are still relevant today.
That is, we need to tackle the regulation of Internet phenomena taking into account laws, norms, technical solutions and economic activities. Economic activities in particular, I think, are very important, because the operational logic behind platforms today gives more visibility to misinformation: the more clicks they have on their platforms, the better.
So transparency in reports is a good first step to address the problem, and it's a welcome step. But these platforms are based on algorithms and filter bubbles that are being exploited to disseminate disinformation.
So we have made our social media the public sphere where our public debate is taking place. Should we have done that? Is this really healthy for our democracies? And if social media has acquired this importance in our democracies, are the current business models and standards acceptable? Should we intervene in these standards? I think that these are open questions that we need to address. Thank you.
>> MODERATOR: Thank you. One interesting thing I thought you brought up, that goes back to what I talked about and how we dealt with this in Sweden, is the clear red line when it comes to attacking election infrastructure.
And how we can work with that.
So I thought that was interesting. But also looking at the chain of disinformation production, which of course gets at a different aspect of this. The chain is interesting, because there are different motives involved at each stage.
OK. Goetz, the floor is yours.
>> GOETZ FROMMHOLZ: Thank you very much for having me on this panel as well. As the Open Society Foundations, we are an organization deeply embedded in Civil Society, mostly through our grant-making. And we've been trying for a while now to understand the effects of disinformation, false narratives and related phenomena through multiple research projects we've been funding through our grantees. Right now we are probably spending about $7.5 million a year on trying to understand this phenomenon that is threatening democracies all over the world, but especially, I think, what we call the west.
So what we do, basically, is concentrate on disinformation, of course, but also on abuse of personal data, monopolies of power, intensifying economic inequality, and online discrimination.
And because we are so diverse in what we are looking at, we try to find a more holistic view on how to approach this topic of disinformation, especially from the viewpoint of an NGO.
And just to give you an idea of who we're working with: we work with the EU DisinfoLab, but also, in Germany, many of you may know [German organization name].
And what we have always been pointing at, or what our grantees have been pointing at, is that we are definitely living in a post-fake-news era. So it's very important, and I do want to stress a very important point that my co-panelists have already made: the issue of transparency.
It is very important, especially when we're talking now about false narratives. Many of you may already be familiar with that term. It is when tiny bits of real information, like statistics on immigration for example, are used in political agendas to form a continuous narrative. Political organizations in Germany, for example, do that: they try to use this kind of data to frame migrants in Germany as specifically criminal, as part of a narrative to strengthen their point on migration.
So we are looking not only at single point disinformation, but disinformation put in a larger context of stories. And being part of Civil Society, we always look at narratives. We always look at stories, because stories are what change perceptions and values within societies.
And this is probably the most important thing that concerns us in our work when we look at the use of disinformation today. John mentioned RT as an important player in this case, because RT is working on a large scale to provide a parallel narrative to the reality we can actually, empirically measure. That's very important.
But it's not only news stations like RT. We must not forget, for example, Fox News in the United States as well. We've just recently had the issue that Fox News helped share the conspiracy theory that our deputy chair was actually the whistleblower in the United States.
Which is, of course, preposterous and, to be frank, an idiotic assumption. But platforms like that do add to narratives that actually harm public opinion, that harm society. And this is something we should definitely take a look at.
But how should we do that? I mean, as Open Society, we are of course, first of all, defenders of basic Human Rights. So our perspective is that we do not support just cracking down on content on platforms. Of course, we do believe that content has to have some sort of quality check.
But we have to recognize that everybody's personal opinion, even though it might be disinformation, is still in the realm of free speech in many cases.
And this is something we have to take into account, and something we need to measure when it comes to harmful content. There need to be red lines. We all agree that illegal content needs to be taken down from the Internet, like child pornography or terrorist propaganda, of course. But then we need to figure out ways to actually measure harmful content.
From our point of view, harmful content can definitely be things like hate speech. It's definitely something that changes the opinion of entire groups within society towards a more radical point of view, one that runs parallel to, and apart from, the clear and measurable reality we can actually observe and explain.
So that's definitely a threat. But we do not yet see that we really have the tools to address these issues. So basically, when we're talking about disinformation, false narratives and things like that, I have two key messages.
Both rest on the basis of accountability. On the one hand, of course, we need to have transparency, but we also need to hold accountable those people who spread this news.
So what we need is to bring legal certainty to the liability regime of Internet platforms for third-party content. That's one important thing. And we need to tackle disinformation and the manipulation of public opinion by supporting, in this case, Germany in an EU-wide approach of upgrading electoral law for the digital era.
So these are tools that can help us further in countering disinformation and false narratives. But to be quite frank, I think we're still very much at the beginning, and we still need to understand a lot, especially the connection between online behavior and offline behavior. There's a huge black box there: we still need to understand whether and how online disinformation actually influences offline behavior. And with this, I would like to close my remarks. Thank you.
>> MODERATOR: Felix was trying to protect his microphone.
[Laughter]
Thank you for that initial statement, talking about the need for a holistic view, focusing on content, looking at transparency and accountability, and also raising the question of online versus offline, both in terms of, I assume, consequences, the actual effects of this, but also, I guess, in the two different domains of influencing, both online and offline.
Thank you for those initial statements. I think we've seen proposals that are, overall, rather in line with one another. We'll see if you agree with that statement. But we've also seen areas that don't fully overlap. I'm going to switch microphones.
[Laughter]
So the way we're going to progress this discussion now is that I'm going to make some statements, and I'm going to propose some questions to the panelists. I think I should not use this microphone, because we're getting static interference.
[Laughter]
My first statement goes back to the definition issue. We are here, of course, to figure out how we can define what is legitimate. The supporters of the Paris Call promise to prevent interference in political processes. But what, then, is interference?
During the Swedish election, we commissioned Lund University and Professor James Pamment to draft a research paper on countering hostile influence, the state of the art. He argues that from a single incident you cannot fully decide if that single case is illegitimate. However, if you look at a chain-of-events analysis and look at several cases, you can determine whether or not something is legitimate or illegitimate using what Felix also mentioned, the DIDI diagnostic framework: deception, intention, disruption, interference.
This is based on: one, information influence activities contain deceptive elements. Techniques of information influence obscure, mislead and disinform. They are deceptive by nature.
Two: illegitimate activities are not interested in contributing to constructive solutions to a problem. They intend to do harm, for example by stoking opposition on both sides. John mentioned the Black Lives Matter example, where the Internet Research Agency meddled on both sides of the discussion.
Actually, by fictitiously creating both groups.
Three: illegitimate activities are disruptive. They not only intend to do harm, but really do, as evidenced by destruction of property, creating riots, and contributing to different forms of civil unrest, for example.
And four: information influence activities constitute interference. Foreign information influence activities, sometimes via domestic proxies, interfere in domestic democratic processes and in the sovereignty of states. They interfere where they are not necessarily mandated to be; they come from somewhere else.
So taking those four words: they are deceptive, they have intent to do harm, they are disruptive by nature, and they constitute interference.
Now the DIDI framework says that this will help us draw a red line. We've had different injects here, and we've heard people propose a number of other terms as well. I'm going to leave it at that.
And the first question will be: is it this simple? Is this all we need? Is it possible to sit down and, using simple terms, draw a line?
And I'm going to go backwards this time, so I'll open with Goetz: how do you relate to those four terms? Are they useful, from your perspective, as a way of deciding what is illegitimate interference?
>> GOETZ FROMMHOLZ: They are definitely good indicators to measure, to get a bearing on these issues. But as a social scientist, of course, I do not like to reduce things to indicators only; there's much more complexity to it.
And as I said in my remarks, we need to take a look at the online world, how the online world functions, how it's being used by governments, but also by Civil Society, which is not always good; there are also parts of bad Civil Society. So all the different actors. We also need to specifically take a look at actors and not only use these points. And, as has been stressed on this panel already, we also need to measure the level of transparency around these issues.
So if somebody is posting information, we need to know who it comes from and what it is. And this is a very good way to measure, for example, deceptiveness: the less transparent something is, the more likely it is to be disinformation.
So I think we need more indicators. We need more variables in this entire thing, because there's a lot more to understand than we know right now.
>> MODERATOR: Thank you for that. Marilia, you talked about stability, stability as a component of understanding whether or not something is illegitimate, if I understood you right. If you think about the DIDI framework and the terms proposed, is stability part of it, or is there another component we need to understand what is illegitimate?
>> MARILIA MACIEL: Thank you. I think it proposes a good framework, and all these frameworks are useful to us as methods of approaching the situation and trying to decouple its different elements. So they give us a step-by-step way to understand where we stand.
I think that the scale criterion that has been added by John is something interesting, because disinformation is not something new. But the scale we see now, the ease with which people can produce and disseminate disinformation, is something that we haven't seen before. So perhaps it calls for either new regulation or a reinterpretation of the regulation that we had before, because we are confronted with new phenomena.
However, once we have a clear method, a clear approach to the problem, we need to ask ourselves: what now? What do we do if we are confronted with a situation that is indeed destabilizing? And I think that stability is an important concept to put on the table. What do we do? How do we make actors act together? That's why I believe norms are an important first step to make actors coalesce around an understanding that leads to action, to do something, or to inaction, to refrain from doing something. And we need to use the international law instruments that we have.
I think your next question is more related to the instruments of international law. And I think we need to understand in which situations Human Rights can be helpful as a framework, sovereignty can be helpful as a framework, non-interference as well, and how we interpret these principles, which are legally binding, in the situations that we come across.
>> MODERATOR: Thank you. Because I have a question for Felix, I'm just going to say now that if you don't agree with each other, please ask to interject, and I'll be more than happy to welcome questions.
I'm going to save up my questions until we're done with this first round, and then I'll do more interactive injects.
So John, you mentioned effects and scale. And I think that's interesting. But effect is difficult in the sense that we don't know that until afterwards, right? So how can we use that to determine when something's happening, whether or not it's over the line?
Is it only useful in hindsight?
>> JOHN FRANK: That's a really good question. I think the answer is, it depends. There are instances, of course, where you can see ahead of time the scale of what someone is trying to do, where the purpose is the actual scale of effects.
We do have this long history, and Marilia's point is a very fair one, which is that it's highly subjective. There is well-known foreign interference by governments around the world, things that, if they were transparent, people would have a problem with. And yet that's been kind of the accepted dark arts of intelligence agencies for some time.
But it is different today, it seems, where there's this asymmetry of open societies having virtually everyone connected to information sources in a very immediate way that just invites asymmetrical attacks on civil discourse. And so maybe where I would come back to scale is: what are we doing to create resilience, and maybe to turn down the opportunity for foreign interference?
So I thought it was a very positive step, personally, when Twitter decided they were not going to take political advertising, because they don't feel they can control its impacts on their platform. Google, I think, has announced they're going to limit the microtargeting on their advertising platform, again because they can't necessarily control the scale and know what the impacts are.
So maybe it's a precautionary principle. And certainly, on Microsoft's ad network, we decided not to carry political advertising at all. There's obviously one company out there that believes it's important to its business and to who they are to continue with their current practices. And I hope they'll reconsider, at least putting in some safeguards, because it really is the scale of reach of the Internet, which this group has helped create, that creates the asymmetric possibility of interference.
>> MODERATOR: I think there is a lot of use in thinking about scale and effect. And definitely, as you said, it's highly difficult to do that other than in hindsight, but we still need to take it into account to some extent. Felix, you started by agreeing with the DIDI framework, at least proposing it as a way to look at these issues. We've also heard the other things proposed: scale, effect, stability, and going back to content. Anything you're disagreeing with that you've heard so far?
>> FELIX KARTTE: Well, I do think it can be a useful framework. First of all, I just wanted to say, as I understand the headline of this panel, we are not looking to create a new research design or find standards by which to censor individual speech. We're actually looking for variables by which to assess the legitimacy of state behavior. I think that is a very important distinction that I just wanted to make here.
Again, in theory I would certainly agree with the need to look at scale and, obviously, effect. But as you just mentioned, with the current lack of transparency and common standards to assess and measure that kind of behavior, it is impossible. I mean, we all keep referring back to the examples of the 2016 U.S. elections. So in the meantime, we either aren't having the effect discussion or simply don't have enough access to evidence to talk about more recent examples. And I know that there are more recent examples.
So yeah, while in theory I really do agree on the need to find measures for scale and impact, I don't think that should keep us from taking forward the discussion on the kind of normative principles that we want to see states comply with in the information space, because it would take a very long while to get there.
>> MODERATOR: You're right, we're here to talk about state behavior, but at the same time, this is becoming much more complex. Yesterday I spoke on a panel and talked about a recent example: I was looking at a case in Nigeria where you see a Nigerian company using Russian software and content production from the Philippines, being leveraged by both state and non-state actors. In a context like that, of course, attributing to states is highly complex and probably not possible at all.
What do you think? How do we relate to that, and to the fact that we often can't attribute behavior to states in that sense?
>> FELIX KARTTE: Well, I think attribution in other areas, such as cyber, is also extremely complex and difficult, and it ultimately comes down to political calls. I totally agree. But I mean, we can still make progress on the accuracy and transparency of attribution on the platforms, because we do see the platforms attributing to Russia, to Iran, to North Korea, to Saudi Arabia. And it would just be, I think, from my perspective, amazing if they could include in this methodological discussion a broader circle of trusted researchers. Just to make sure that they are doing their job well in keeping elections safe and clean, if we are already outsourcing that job to private companies, which I find problematic. And in terms of reporting, I do find it very telling that in their transparency reports, which I think are quarterly, they will tell us about the volume of takedowns based on their adult nudity policies, for instance, but they will not systematically report on foreign interference. I can't find that as a policymaker, and I find that very problematic.
>> JOHN FRANK: Two points. One is that we in fact do make an effort to create more deterrence and discouragement by announcing when we detect foreign actors attacking political clients, customers of ours. So we've announced that Russia-related groups were attacking the United States Senate, the Hudson Institute, the German Marshall Fund and the European Council on Foreign Relations. We also announced an Iranian set of attacks on another set of customers.
So we do call out those things, in part because it's important to create more transparency and accountability. Hopefully that will discourage people.
But it is a fair point that not everybody puts foreign interference in their transparency reports. I have to say, though, it's not so easy to know. If there were some secret data in the back drawer, a secret report we were simply refusing to share, that would be one thing.
But I think we saw that some of the best work that was done on detecting it wasn't by the companies themselves. It was when the United States Senate hired some really good researchers to go through the data sets to put together analysis.
So that does lead to the question that we need to make sure we have regulatory frameworks that create access to data for governments, so they can evaluate these things.
>> FELIX KARTTE: For governments, and even more so for researchers as well. Because we're still not talking about illegal categories of behavior or content. So I think it would be good to have a bit more transparency for society at large, and especially for researchers, who actually know how to do that kind of detection and attribution work much better than either governments or the platforms themselves do. Because if we look at most of the major takedowns on Facebook and Twitter recently, they were most of the time based on tipoffs that the platforms received, indeed, from NGOs or from researchers. I think they need to have a much bigger role in that. But otherwise, I agree, of course.
But just to say, because you said it's of course not easy for platforms to detect foreign interference: it would be good to know at least that they're looking for it, and to understand how they define it.
>> JOHN FRANK: There are certain things, like when the customer shows up offering to pay in rubles, you might think that's not the currency of the campaign.
[Laughter]
And there is a very good report by the Oxford Internet Institute that talks about seven different countries having foreign influence operations on social media: China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela, based upon their research.
So I think it is very important to have researchers out there participating in this process as well.
>> MODERATOR: I'm going to spin off on that, ask you a follow-up question, John, and then give the word to Goetz, because I think that could be an interesting follow-on. Let's see if we can get a bit less static.
So Felix mentioned the role of companies in attribution. Should companies attribute to states, then? And that leads me to my second question: to attribute, do we perhaps need less anonymity online, if we're going to have private actors do attribution? Do users need to be less anonymous, so to say?
I'm going to get Goetz to talk a bit about who should attribute, if we want responsible state behavior, and what's the role of anonymity.
>> JOHN FRANK: We've gotten schooled on the differences. In a technology company, we don't talk about attribution the way governments talk about attribution; we talk about it in the sense of accountability, identifying who did it. Governments talk about it as a political declaration of responsibility. So to avoid confusion, we don't attribute, because that has a special meaning for governments. But we do promote accountability. We think it's incredibly important to have accountability in the system precisely because these activities are done intransparently. We go out, with customers' permission, when we detect foreign attacks. I think it's 10,000 accounts we have now notified that their account has been subject to attack by a foreign actor.
So that is important to us. There is the same question that always comes up about your sources and methods: if you reveal too much, you'll lose your ability to make the same kind of detections in the future. But everybody's got that problem.
I think it is important to create more accountability, and after the WannaCry attack two years ago, it was the first time that we and some other major technology companies sat down and shared, essentially, our scientific evidence of what we knew about the attack and who was behind it. And we shared that with governments as well.
And then we also took cooperative action among the companies to essentially disable North Korea's infrastructure across our platforms. So I think it is important that companies take action.
We need to take responsibility for the technology that we've created and the way it's actually used in the world. We can't just say we're going to be responsible for the way we wanted it to be used. I think the level of accountability for companies needs to go up as well.
>> MODERATOR: And what about participation on the Internet and sharing information under the blanket of anonymity?
>> GOETZ FROMMHOLZ: Anonymity has been a huge problem, I think we can all agree on that. And when we are demanding more accountability, it is not only accountability for platforms, but for the people who participate online. When people are sharing disinformation, they need to be held accountable in many ways. One of our grantees in Germany is an organization that works with Facebook, and they have this Pinocchio sign: every time a user tries to share content that's been identified through fact-checking as false information, this little icon shows up and shows them how long the nose of this little Pinocchio is, to demonstrate how bad the disinformation actually is. So that is a very, very simple way, for example, to create accountability. But this goes far further than that, because when we're talking about participation on the Internet and accountability, we are not only talking about disinformation; of course, we also mean hate speech and things like that.
And things that cross the law. In these cases, anonymity must not be the blanket that protects the people who disseminate this kind of information.
We need rules on the Internet. We need rules on the user side, of course, codes of conduct. But we also need rules to ensure that if somebody is breaching, crossing red lines, as we're saying, not only as a company but also as an individual, we are able to look behind the blanket of anonymity and hold people accountable for the disinformation, for the harmful information they are spreading.
>> MODERATOR: What about Russia? For criminals there, do you want the cloak of anonymity to be lifted as well, or do we want these rules only for western democracies?
>> GOETZ FROMMHOLZ: We no longer operate in Russia. We are particularly taking a look at functioning democratic societies, where the institutions of democracy are working. In contexts like Russia and China, we have a completely different monster to deal with. Of course, anonymity is a very important method for spreading the ideas and values of open societies there. But when we're talking in this context, it's very important to note that it's not necessarily Russia that's under attack; Russia is attacking free democracies. They would definitely have a different view on that, most definitely. But we are also allowed to have a very specific view on that, because we have very intriguing evidence about it.
What I'm saying is that I'm addressing functioning democracies. Yeah. Sorry.
>> MODERATOR: I'm going to ask you, Marilia, whether you have any comments on this last round of questions. Before I do that, I'm just going to say that after that, I'm going to open up to questions from the audience.
And by the end of the session, I'm going to ask the Rapporteur to come up with some of the main conclusions from today's sessions before we carry on.
But before I do, you don't have to put your hands up yet, because I'm going to ask Marilia if she has any final comments on this last round, specifically relating to the issues of anonymity and attribution, and how we deal with these things as we try to hold states accountable and determine where the red line should be drawn.
>> MARILIA MACIEL: Very briefly, so we can open the floor. I believe that it's very important to frame this discussion, as we did, in terms of transparency, accountability and improving attribution, both in technical and political terms, and not to bring the debate to questioning anonymity, because anonymity has been a very important tool to protect free speech and privacy online. And we just need to remind ourselves that the application of Human Rights in the international arena, in security debates, is something that is still fragile. We don't have a consensus in the international debate on whether Human Rights obligations apply or not to foreign citizens. States have to protect Human Rights when it comes to their nationals, but whether action by the United States, for example, against a foreign citizen would run counter to its Human Rights obligations internationally is contested. So I think that we need to do whatever we can to protect the Human Rights standards that we have, and the concept of anonymity has been a cornerstone for the protection of free speech and privacy.
>> MODERATOR: Thank you for that. OK, now I will open up for questions. I would ask you to say who you are and to whom you're directing your questions. I'm going to collect about three or four questions, depending on their scope, before I let the panelists answer them. I'm just going to go down this line, because I see hands on this side. So I'll start there, and then I'll take the two or three gentlemen on the side.
>> Thank you, I'm Hans Klein from Georgia Tech. I'm concerned that programs to combat disinformation can become programs to combat domestic dissent. And I'm concerned that sometimes governments -- I wonder sometimes, are they protecting the public, or are governments protecting themselves from dissent among the public? That's a concern I have. I'll give you two examples of people I have interacted with. One is the journalist Sy Hersh, an investigative reporter. He exposed the My Lai massacre in Vietnam, damaging to U.S. policy. He says he cannot get published anymore in U.S. media; he says he's effectively silenced. The second person is MIT professor Ted Postol, a former colleague of mine. He exposed the Patriot missile failures during the Iraq war and questioned the chemical attacks. His claims were very damaging to U.S. policy.
He also says he can no longer be published in U.S. media. Both these investigative reporters are on RT and other media outlets claimed to be fake news outlets. So I'm concerned that in fighting fake news we're fighting dissent, and possibly that efforts to protect the open society may kill or damage the open society. A final comment: I'm doing a workshop related to this topic next week at Georgia Tech. But in any case, I'm concerned about protecting dissent when we fight what's called fake news and disinformation.
>> MODERATOR: I'm going to take more questions. Is there anyone you would like to have answer that specifically, or is it free for --
>> The gentleman from Microsoft. I found your comments on seeking a balance between openness and combating disinformation interesting.
>> MODERATOR: Let's take the gentleman in the dark blue jacket.
>> Good morning. My name is Juan Fernandez. I'm from the Ministry of Communications in Cuba. I'm a little disappointed that in this supposedly thorough examination of disinformation, with the exception of Marilia, and a passing reference to Fox News by the other gentleman there, nobody mentions the main source of disinformation in today's world: the big media conglomerates of the world. Mainly American, but not only American -- global, in Brazil, Spain, and some other countries.
And usually this is done with the blessing, and even the encouragement, of the political lines of the governments in those countries.
As the previous speaker from the audience said, we have plenty of examples. One of the most striking is the coverage nowadays of the unrest happening in several parts of the world. The coverage is very different for what's happening in Hong Kong, what happened in Ecuador, what's happening now in Bolivia after the coup, or what is happening in Chile. It is very different. That is a way of using disinformation for political purposes.
And that is not new, because we have been quoted many distinguished academic articles. There's also a book about this, Manufacturing Consent, that explains very clearly why this happens. So I would really ask the distinguished panel to be fair in the treatment of this issue and not only single out some countries, like Russia and all the usual suspects, but also to get into the country that holds the record, that is, the country with the most interference in the foreign affairs of the rest of the countries of the world. And I think we should dig in there before putting the blame on the other. You know, everybody here lives in a glass house. Thank you.
>> MODERATOR: Yes, sir.
>> Yes. My name is Alexander. I'm a journalist and Civil Society activist from Russia, and a member of the Civic Chamber of the Russian Federation.
I've heard a lot of words about Russia as a monster in social media, and I'm a representative of this country. First of all, I want to say several words about interference in our elections from abroad.
For example, this year in September we had elections in Moscow for the city council, and during the whole election day there was a lot of fake news about those elections, spreading on Google's platforms, in Google services, on YouTube. But when the Moscow city election commission decided to publish answers to that fake news, they couldn't do it on Facebook. They couldn't do it on their own Facebook account, because Facebook didn't allow the Moscow city election commission to publish answers to the fake news.
And this is real interference from abroad in our elections. The second thing is about (?) and red lines. For example, on Twitter -- one of the panelists mentioned those rules.
A month ago, a big new media project appeared on Twitter called Good News from Russia. One month, and about six million views. And this project was banned. Twitter took it down, because Twitter didn't want good news from Russia to be published on its platform.
Without any explanation. Without any rules. Without any red lines. Just no good news from Russia. And the third thing: one of the panelists mentioned new Oxford papers, by Oxford researchers, about social media.
So can I have a look at them after the panel? Because you have such a lot of research about the bad role of Russia, and I want to know the new facts. Thank you.
>> MODERATOR: All righty. I hope we can manage one more round of questions after we've gone through these. We have about ten minutes left for the panel, so I'll ask you to keep it brief. I think we'll go down the line. The questions being: Is there a risk of actually fighting internal dissent? Is there a risk of false attribution or bias in attribution?
What is the role of politics in countering foreign interference? And how do we draw this line, which I think these questions come back to in the end -- how do we keep this fair and balanced?
And I'll start with John.
>> JOHN FRANK: It is very important to preserve a role for Freedom of Expression while trying to create more resilience in the system. Fact-checking has been deployed as a possible solution. The solution I prefer is a group called NewsGuard, which rates news sites on nine journalistic criteria. Based on those criteria, it gives a little green check or a little red check when a news story pops up.
It's run by professional journalists, and they have a very transparent process for it, but they're not trying to prevent news. They're just trying to give some indication of the professional journalism of the site. So that, to me, is one way to balance these things.
Certainly no one's claiming that there's never been foreign interference in Russian elections. To the contrary. I think we all recognize that this is a growing problem.
There is a concern though that we can have an escalating cycle where we continue to have more and more intervention. And I think that becomes destabilizing for the world.
>> FELIX KARTTE: So the EU has an actor-agnostic approach to foreign disinformation. Evidence shows us, however, that the Russian Federation government and government proxies are the main actor in this field as regards European Union and Eastern Partnership audiences. We have documented more than 7,000 cases of Russian disinformation on our publicly available website, EUvsDisinfo.eu. I'm going to let the evidence speak for itself.
>> But show, show them. Show those 7,000 facts. Where are they published? Because we have --
[Overlapping speakers]
>> FELIX KARTTE: EU--
>> Information from foreign media speaking about Russia.
>> FELIX KARTTE: I just told you where to find the evidence.
>> (?) in Iraq.
>> GOETZ FROMMHOLZ: I guess enough has been said on that topic. I want to address what you've been saying. I mean, it's very unfortunate that legitimate research -- that people who have published critical research from reputable organizations and universities have issues becoming visible to the public, and outlets like RT are the only means for them to become visible again.
First of all, that's an issue of how we deal with critical information. We see it with whistleblowing and things like that. Critical information is also sensitive, and there needs to be societal acceptance that there are truths out there we may not like but that are still factual.
So that's very important. As for using outlets like RT: I'm not saying that all information RT is broadcasting is false information. But what I want to go back to is the statement I made at the beginning. We're talking about false narratives. And in false narratives, we have elements that display factual truth but are skewed and used in different contexts. I do not know the cases of your colleagues.
But we do not know how this information is being processed further and in service of what kind of agenda. We always have to be very careful when thinking about the channels we use for the dissemination of information: not only to make sure the correct information is being transported, but also to think about what the outlet is and what this outlet will do with this kind of information.
But it is a shame that your colleagues don't have access to other media outlets and possibilities to publish their information. And this brings us to the other issue: when we are talking about disinformation, I said that we need to talk about Human Rights, and there's Freedom of Expression, and society needs to tolerate various degrees of different opinions -- and people are entitled to lie. That's not a crime. They can do that.
And we also need to be able to withstand that criticism as a society as a whole. We have a German name for that: "wehrhafte Demokratie," the democracy that can protect itself. For that, we need all members of society to expose false information, but also to support the good information that's out there. Thanks.
>> MARILIA MACIEL: Thank you. I would also like to make a quick comment on the misleading confusion between disinformation and domestic dissent, but connect it to the international discussions taking place in the UN GGE. I think it's very interesting that, historically speaking, in the UN GGE we had a division between countries that framed the debates as cybersecurity debates and countries that framed them as information security debates. And that dividing line put the U.S. and Europe on one side and China and Russia on the other. But this confusion we're starting to see in Democratic countries like the U.S. and others perhaps points to the start of a convergence between the ideas of information security and cybersecurity. And I'm wondering -- I don't have insider information; if any of you have, perhaps it would be good to have a discussion on that -- how this will affect future debates in the UN GGE.
Just to respond to Juan on the concentration of traditional media: I think this is a very important problem we have in many regions of the world, including Brazil. All our media outlets, from television channels to newspapers, are controlled by four families that have a political agenda. Yes, there is disinformation coming from traditional media, for sure. However, in the last Brazilian elections we saw that the traditional media was picking up information coming from social media. And social media, especially WhatsApp groups, were much more decisive when it comes to informing people and building opinions than traditional media.
So I think there is a reversal of the situation there that needs to be better understood: this process of fake news or misleading information using the rubber stamp of traditional media to reinforce certain information, and traditional media picking up misleading information from social media channels without exercising their due responsibility of fact-checking.
>> MODERATOR: Thank you. I think while we've had different opinions, especially when it comes to the third pillar, political interference, I've heard a lot of agreement when it comes to red lines: interference in election infrastructure and voter disenfranchisement are two clear red lines that are not acceptable. We have only three minutes left, so unfortunately I won't open up to more questions. But I will ask the Rapporteur to close with some final remarks regarding the main question of public diplomacy versus disinformation. Can we see a clear red line here?
>> RAPPORTEUR: Yes. So thank you. There was a lot to cover in this session. I'll try to pull together some key points as I see them, and try to be as brief as possible.
There were some interesting criteria for measuring disinformation. John Frank mentioned five criteria: transparency, extent, purpose, scale, and what the effects are. I think these are an important way to measure disinformation and foreign interference. The importance of seeking a balance between openness and combating disinformation is a key headline of today's discussion, and I look forward to integrating that into the report.
There was a lot of discussion about definitions, but perhaps we need clearer definitions of what might constitute foreign interference versus domestic political activity. There was discussion around the application of legal norms, frameworks, and international principles; I think that is a conversation we need to continue to have. There has been overarching agreement that this is a phenomenon beyond what we have dealt with in the past in terms of scale and effect on society, especially in functioning Democratic societies, and that we need to come up with brand new formulas to deal with these issues. I'll leave it there. Thank you.
>> MODERATOR: Thank you very much for coming and attending this panel. We have a minute left, and the key role of the moderator is to finish on time, so I'm going to do that. I want to close by thanking Microsoft for arranging this panel and bringing together the panelists, and all the panelists for being here. My final remark is that this will continue to be a discussion for the coming years. There are a lot of opinions and a lot of thoughts, and this will not be the final panel on this topic -- we can be sure of that. Thank you very much.