IGF 2025 – Day 3 – Workshop Room 5 – WS #193 Cybersecurity Odyssey Securing Digital Sovereignty & Trust

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

¶ (Music playing) ¶

>> ENERST MAFUTA KATOKA: Good morning, good afternoon, good evening.

In a digital age where trust is the currency and sovereignty the fortress, the challenge lies in building resilient, interoperable systems that protect both security and individual rights. Recent breakthroughs in technologies such as artificial intelligence, quantum computing, and innovative encryption methods are transforming digital ecosystems and redefining the cybersecurity landscape. They are shifting the power dynamics between states, private entities, and users, exposing vulnerabilities.

These vulnerabilities appear in critical infrastructures, such as the recent ‑‑ espionage operation, and fuel challenges like deepfake misinformation and automated ‑‑ In today's discussions, we explore how policymakers, technical communities, governments, civil society, and the private sector can collaboratively establish robust governance frameworks, with principles rooted in security by design, resilience, and digital sovereignty, to ensure global interoperability and trust.

So allow me to introduce my speakers. First, my name is Enerst Mafuta Katoka from the Zambia Standardization Technical Committee. I'll be your moderator. To introduce our online panellists, we have Atsen Bako, who is from Court for Africa, who is a security evangelist, representing the African region group.

and we have, also, Lily Edinam Botsyoe, from Ghana, who is a Ph.D. candidate in information technology at the University of Cincinnati. And we have our online moderator, as well, Gabriel Karsan. I'm sure he's online.

To come to the room, we have Dr. Issakha Doud‑Bane, who is an African Internet Governance Forum MAG member, and he also sits on an advisory panel on AI.

Then we have Bolutife Adisa, who is an information security ‑‑ I've just seen him walk in.

Please, you can join us.

And he's also an info chair at ICANN.

Then we have our special lady here, Ihita, from India, who is a cybersecurity engineer and program manager at CloudSEK Initiative.

I'll go now to Osei Keija, who is a public interest technology specialist from the Ghana Youth IGF.

Last but not least, we have Dr. Monojit, who is a cyber governance and national security researcher at the C‑Joint Tri‑service Think Tank under the Ministry of Defence, Government of India.

So, ladies and gentlemen, we're going to have five minutes to respond to the questions. From there, we're going to have interventions from the room and questions from the online speakers.

Without wasting much of your time, I will start with Mr. Atsen Bako, who is also online.

Now, Atsen, given the rapid evolution of digital infrastructure and technologies, how can the design of cybersecurity governance frameworks be optimised to strengthen both resilience and sovereignty while maintaining global interoperability? Specifically, how can policy evolve to address the operational and strategic challenges posed by AI‑driven cyberthreats and quantum encryption?

Atsen, five minutes.

>> SAMAILA ATSEN BAKO: Thank you so much, I hope you can hear me clearly.

>> ENERST MAFUTA KATOKA: We can hear you.

>> SAMAILA ATSEN BAKO: Oh, awesome. Awesome. That's a big question, but I will try to answer it the best I can. I think the beauty of frameworks in our time is that, over the years, they've been worked on, refined, and improved. I personally believe there's no reason to reinvent the wheel.

I also think that we don't need to have too many frameworks per topic, per item because I think at some point, it's repetition of what has been done before.

What I think is the biggest issue, in terms of what we're talking about, is the differences in how they've been adopted or implemented by different organisations or countries, which, obviously, we know depends on ‑‑ the level of development in the country or the budgets that are assigned to things like this.

If I give an example using the cybersecurity industry, where I mainly work, you know, there's the popular NIST Cybersecurity Framework, which just got a new version, 2.0, that emphasises the governance aspect of cybersecurity.

However, if organisations do not take their own steps to get familiar with this new version or to adopt it, then they will be left behind, due to the ‑‑ integrity of ‑‑ as you have pointed out.

So when talking about frontier technologies or emerging technologies, a critical approach would be to leverage standards because standards are widely adopted and trusted ‑‑ with the right expertise ‑‑

An example, for instance: if you look at the Internet of Things and the security of those devices, there's this thing called the Open Web Application Security Project, OWASP, that released an IoT project to ‑‑ and I quote, to help manufacturers, developers, and consumers better understand security issues associated with the Internet of Things and to enable users, in any context, to make better security decisions when building, deploying, or assessing IoT technologies. This means that both manufacturers and users have the guide, and even regulators can choose the guide as a foundation or template for what baseline security should look like when it comes to IoT devices.

When that is enforced by a regulator, then you've raised the security of IoT devices because standards are recognised by everybody.
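The baseline‑security idea the speaker describes, a standard that manufacturers, users, and regulators can all check against, can be sketched as a simple automated audit. This is an illustrative sketch only: the device fields, the credential list, and the specific checks below are invented for this example and are not taken from any OWASP document.

```python
# Hypothetical illustration: screening an IoT device record against a small
# security baseline (default credentials, unencrypted management, unsigned
# firmware). All fields and rules here are invented for demonstration.

DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

def audit_device(device: dict) -> list[str]:
    """Return a list of baseline-security findings for one device record."""
    findings = []
    if (device.get("username"), device.get("password")) in DEFAULT_CREDENTIALS:
        findings.append("uses factory-default credentials")
    if not device.get("tls_enabled", False):
        findings.append("management interface not encrypted (no TLS)")
    if not device.get("signed_firmware", False):
        findings.append("firmware updates are not signed")
    return findings

camera = {"username": "admin", "password": "admin", "tls_enabled": False}
print(audit_device(camera))  # reports all three findings for this device
```

A regulator enforcing a baseline like this could, in principle, require every device shipped to pass such an audit with zero findings.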

And I would also like to add there are other challenges.

I think one of the key things is the general state of development, digital access, subject‑matter experts or skilled workers, and, lastly, the speed of law. I'm talking about the process it takes to agree on and update laws, because they can become obsolete quite quickly.

There's a popular saying in this industry that the bad guys move at the speed of light while the good guys move at the speed of law, and it will always be a game of catch‑up for us. We are at the mercy of the interests and priorities of the law, because regulators ‑‑ regulators have to be knowledgeable, reputable, and appreciate the needs of ‑‑ emerging technologies.

The users, manufacturers, and other stakeholders can see results.

I realise, for those in the room and those joining online, this may be scary. I think I've basically exhausted my five minutes, so I will pause there.

>> ENERST MAFUTA KATOKA: Thank you. You talked about standards. I come from the standardisation side, and one of the things that we have been struggling with is to find proper mechanisms for the adoption of security‑by‑design standards in integrated systems.

Now, I will go to Ihita.

Ihita, you've been part of various regional organisations. One of them is the ITU, and I think you've participated in the standards‑making process. And, also, you come from Asia‑Pacific. So how do you see these countries balancing the need for strong security systems with the preservation of digital sovereignty, especially when deploying security by design within critical infrastructure like telecoms? How can regional cooperation help safeguard sovereignty without creating fragmentation?

>> IHITA GANGAVARAPU: Thank you so much, Enerst. Hi, everyone, those joining us in the room and remotely.

First of all, I would like to say that the session title is very apt because we use the term "odyssey." It's been a long and eventful journey, especially towards digital sovereignty. If you look at a decade ago, it was very sacred ‑‑ you're seeing a shift in regional optimisation. Today, the narrative has changed. It's not just one part of the world but across geographies and countries with all types of government: democracies, et cetera. Much of the change you see is from rapid digitalisation in APAC. We use ‑‑ to make all the payments in the country. This is widespread. We have health care, education; all of these run through digital systems.

So the Internet and these systems have become critical infrastructure. Any disruption to them could come from cyberattacks. It's something that economies and countries are trying to find mitigations against. Right?

So, since you mentioned security by design: it's not just something ‑‑ security by design is very fundamental, especially when you're looking at critical sectors like telecom. It's not just about stronger encryption or monitoring. It's about designing systems where the country gets control over access.

In India, for example, the government has come up with a trusted telecom centre that oversees the procurement of different telecom equipment to ensure there's integrity and resiliency in what we incorporate into our infrastructure.

At CloudSEK, we conduct research. We have noticed that supply‑chain attacks have been tied to shifting geopolitical dynamics as well.

So security by design should go beyond compliance. It is about anticipating risks at a very systemic level.

I will bring in the enterprise level. Sovereignty is not just about governments.

For enterprises, digital sovereignty is demonstrated by complying with local laws. There should be no subjectivity in those laws, because it could affect them, and they need to ensure there's no fragmentation.

You need regional cooperation to enable trusted data flows, shared security principles, and mutual recognition of trusted vendors. Far from fragmenting the Internet, this can actually speed up the whole process.

Just to add one last dimension to it, I'd like to talk about content and cultural sovereignty, because digital frameworks in certain countries allow them to manage content moderation in ways that reflect their cultural and linguistic norms.

And in the absence of this kind of regional cooperation and alignment, global platforms may overlook or ignore sensibilities, in what many may call cultural and linguistic colonialism.

I look forward to your next set of questions.

>> ENERST MAFUTA KATOKA: Thank you, Ihita.

In cyberspace, we say trust is an expensive word. The way you've highlighted it is something we've been talking about again and again.

I will go now to Lily Edinam Botsyoe, who is online. With regard to trust and safety.

Lily, I just want you to highlight how these policies and public interest‑driven approaches can help sovereignty without eroding that same trust. In your view, how does societal confidence influence national security measures when quantum attacks are in play?

Lily?

>> LILY EDINAM BOTSYOE: Hi, everyone. Good morning, good afternoon, good evening, depending on where in the world you're joining from. I usually say I'm so thankful for the gift of the Internet. Today I'm thankful for time zones. It allows me to do this from miles away. I'm joining you at 3:15 from Cincinnati, Ohio.

I'm just going to dive into the conversation and use examples from what we've seen in research.

I want to make this whole process human‑centric and drive home the thought I have in mind.

I'm thankful for the audience in person and for those online.

Usually, trust is expensive. We have a complex variable, which is the human being. If you've dealt with humans, you know that humans react to things differently based on how they appeal to them. Sometimes it's emotion. Sometimes it's perception. I currently do research on the privacy aspect.

I'm going to go through the aspects and break down what would help us make sure that humans are actually at the forefront of all that we're talking about when it comes to digital sovereignty, in a way that is processed and works for them and for those who accelerate the process.

One of the things I will start with is just a scenario. I was in a session where somebody spoke about community‑based research, and the person said something along these lines: somebody had seen a cobweb. We all know what a cobweb is. Spiders weave cobwebs. It's so persistent. It can stay for a while. Somebody saw the cobweb, and because it was broken, they tried to complete it. The next day, the spider came. The researcher sat aside to see what the spider would do.

Here is where the shock happens. The spider came in. In your mind, you probably think maybe the spider would be happy. It was broken, so you helped me. But the spider destroyed the whole thing.

When it was asked why the spider behaved that way, we came to understand that, pretty much, the spider wasn't contacted, wasn't interviewed, wasn't asked if it wanted that to happen, wasn't asked its view on the process being done or the way it was done. And it felt that it was unnatural.

So that is what happens in this space of cybersecurity and digital sovereignty. I will break it down. Usually, technology is fast‑evolving, and you build so much without first thinking about policies that are reinforced and robust.

We play the catch‑up game to try to make sure all the systems are protecting humans.

At the centre of cybersecurity and technology is that key factor, the human factor. If humans are not involved, if human views are not sought, adoption is usually limited; that's how humans react to laws and policies and all that. There's so much happening where people will say, Hey, let's go back and do the stakeholder meetings or stakeholder engagements, and sometimes that's reactive rather than proactive.

So this brings up the whole question of how governments handle issues of, say, cybersecurity: without involving humans, usually the buy‑in is very little.

I will give you an example, aside from the spider story that I mentioned, where even spiders think they should have been contacted. And that's community‑based research.

We talk about multistakeholder engagement. When we involve more people from different backgrounds, it becomes legitimate and accepted.

So, in my view, societal confidence influences the status of national security, because humans are the ones who work with it and buy in to make sure whatever policies, whatever thing you're building, is really robust, and they make sure that it's functioning. You can do all the fancy policies; if you don't involve the humans who need to understand them, it's going to crumble.

So, with AI and cybersecurity, every time it feels as though you're drinking from a fire hose. When you wake up, there's crossfire. How much responsibility do we have as users? And other stakeholders, like developers, manufacturers, the companies, the government: what is their responsibility?

If you work in cybersecurity, you probably hear things like, the human is the weakest link in every system. That's why we're talking about trust. It really behooves us to understand ‑‑ even outside technology, take COVID, for example. There were countries that were very far ahead of us in the Global South because they had broken things down for their citizens. In countries like Germany and Singapore, for instance, contact tracing was pretty much at its peak because their citizens understood what technologies were being used, how to access them, and for what reason.

That is how we get results. And there's always something new happening, always a new threat. It forces humans to the forefront.

The way you can do this is through that public interest approach. If you don't have the human at the centre and you build without a human‑centric approach, it will almost 100% fail.

This happened to me. I went on LinkedIn one day and saw that someone had shared an article about quantum ‑‑ you've heard about the possibilities of how it can break whatever encryption we know because of how fast and strong it is. They're talking about what this Q‑day looks like. It's where everything that's encryption‑based can fail. Me thinking, as a user: well, what can that look like? Can it happen?

Airports could be shut down completely. There's panic, even for somebody like me in research; imagine my mom, who is just really getting into using technology and is thinking, What does that mean for us?

Some of these threats stifle our use of technology, and it calls on us within this ecosystem to make sure that everybody's responsibility is taken into consideration. The government explains its part. Everyone plays their part.

For instance, there have been many instances where users have pretty much lost trust in the system because of things like ‑‑ you go to a website, and the company writes this nice label and banner telling you how it protects your privacy, but its actions are different from what it's saying, and you're wondering exactly what it is they're saying and what it is they're offering.

So, in the past, those things have happened with privacy and cybersecurity ‑‑ it's policies people don't understand.

For us in the Global South, we are playing a bit of catch‑up when it comes to digital sovereignty. We're not sure about the data: where it sits and how it's processed. There's Big Tech, and there's infrastructure and development. If there's something we have to do to make sure that there is confidence and trust, what does that look like?

I will put it in three buckets.

The first is embedding transparency in policy design.

It feels like a cliché, but, like I said, you have to put people from different backgrounds, from the different stakeholder groups, in one room and encourage participation. We've seen some examples with open digital consultations, and if these were done globally, that would be amazing.

Another thing, aside from embedding transparency in policy design, is investing in digital literacy. It's one of the big things. If people don't understand the risks, whether it's AI or quantum encryption, they can't protect themselves.

So let's also push towards civic digital literacy.

I will end with building trust frameworks. We cannot do it alone and be in silos.

Some of the innovation you've seen is not within the ‑‑ and our response shouldn't be standard thinking about the national level. We should look at how to coordinate a global approach ‑‑

>> ENERST MAFUTA KATOKA: Lily, your five minutes is up.

>> LILY EDINAM BOTSYOE: Thank you so much.

>> ENERST MAFUTA KATOKA: Thank you for your intervention, Lily. I like that you've talked about policy design. Now, there's something we always say: a good policy should be forward‑looking and future‑proof. So, in terms of trust, we need policies that are forward‑looking and future‑proof.

So now, when it comes to also policy, I have somebody here, Bolutife Adisa.

You've been in the policy engagement. You write policy for ICANN.

I want you to talk about the policy measures that are needed for trust and security in digital infrastructure, against emerging frontiers like AI‑driven attacks and quantum vulnerabilities. Also, how can we make them adaptable to future challenges while maintaining stakeholder confidence?

Five minutes.

>> BOLUTIFE ADISA: Okay. Thank you very much. Hello, everyone. Bolutife Adisa, for the record. It's a pleasure to be here today. Because I have very limited time, I will just go straight into it.

When we talk about trust and security, I would like to first say that these are not just ‑‑ ‑‑ to reinforce trust and security, especially when it comes to technologies, I would say we need three foundational policy pillars. I'm approaching this from the position of operating critical infrastructure, because when we talk infrastructure, we don't just talk protection. We talk resilience.

So, in order to ensure resilience, first, we need to ensure zero trust by design for AI systems. What is zero trust by design? It's a common buzzword. What it means is we never trust, and we always verify. This is a model that needs to be adopted: it's mandatory to have multifactor authentication and other controls to ensure that we don't have a breakdown that we end up regretting. It's also important to vet the systems, the models, as well as the data that feeds into the systems.

This also should not be a one‑off thing. It's a continuous process, and it should be done more regularly. That's zero trust by design.
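The "never trust, always verify" model described above can be sketched in a few lines: every request is re‑authenticated and checked against explicit permissions, with no implicit trust carried over from network location or a prior session. The token scheme, signing key, and permission table here are invented purely for illustration.

```python
# Minimal zero-trust sketch: authenticate and authorise EVERY request.
# The HMAC token format, SECRET, and PERMISSIONS table are illustrative only;
# a real system would use a key-management service and a policy engine.
import hmac
import hashlib

SECRET = b"demo-signing-key"  # stand-in for a managed signing key
PERMISSIONS = {"alice": {"read"}, "bob": {"read", "write"}}

def sign(user: str) -> str:
    """Issue a token binding the identity to the signing key."""
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def handle_request(user: str, token: str, action: str) -> str:
    # 1. Verify identity on every request -- no session is trusted implicitly.
    if not hmac.compare_digest(token, sign(user)):
        return "deny: invalid token"
    # 2. Enforce least privilege: the action must be explicitly allowed.
    if action not in PERMISSIONS.get(user, set()):
        return "deny: not authorised"
    return "allow"

print(handle_request("alice", sign("alice"), "write"))  # deny: not authorised
print(handle_request("bob", sign("bob"), "write"))      # allow
```

Note the use of a constant‑time comparison (`hmac.compare_digest`) rather than `==`, so token checks do not leak timing information.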

Second, policy needs to mandate AI threat modeling as well as red teaming for these AI systems.

So in the critical infrastructure space, it's a requirement that you work continuously to check the resilience of your systems, and this is very important because, like the previous speaker said, the attackers are moving at the speed of light. So it's important to constantly test the resiliency. When you're not being attacked, you need to make sure this is in place.

Lastly, someone talked about quantum computing, which is important. It's a race against time. The current encryption we have in place: how does it stand against quantum computing?

We need to think more about post‑quantum cryptography.
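One transition pattern often discussed for the post‑quantum era is hybrid key establishment: derive the session key from both a classically negotiated secret and a post‑quantum one, so that breaking either exchange alone is not enough. Below is a minimal sketch of only the combining step, assuming the two shared secrets have already been negotiated elsewhere; the byte strings are placeholders, and a real post‑quantum exchange would require a dedicated KEM library.

```python
# Hybrid key derivation sketch: mix a classically-negotiated secret with a
# post-quantum-negotiated one. An attacker must recover BOTH inputs to
# reconstruct the session key. The input secrets below are placeholders;
# a real deployment would obtain them from e.g. an ECDH exchange and a PQ KEM.
import hashlib

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # A context label plus both secrets, hashed into a fixed-size session key.
    material = b"hybrid-v1|" + classical_secret + b"|" + pq_secret
    return hashlib.sha3_256(material).digest()

key = hybrid_session_key(b"secret-from-ecdh", b"secret-from-pq-kem")
print(len(key))  # 32-byte session key
```

Production protocols use a proper key-derivation function with length‑prefixed inputs rather than bare concatenation; the point here is only the "safe as long as one survives" structure.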

How do we make sure the policy remains multistakeholder and efficient? I think, first of all, we need to look at the UK and Singapore and what they have done in terms of sandboxing innovation. You put innovators in a controlled environment to really test the resilience of these AI systems. And this is very important. It's also a way to ensure engagement of the required people.

Second would be to have sunset clauses and policy APIs. Sunset clauses basically mean that a policy does not go on forever. It gets to a point where it expires, and then you can do a review and see if the policies are still applicable in the context.

Policy APIs are an important technological development, where we have machine‑readable policies such that systems can spot violations by themselves, and this is also quite important.
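The machine‑readable‑policy idea can be sketched as rules expressed as data, which an automated checker evaluates against system events instead of a human reading prose. The rule schema, rule IDs, and sample event below are invented for illustration only.

```python
# Sketch of a "policy API": policies as data rather than prose, so a system
# can flag violations automatically. Schema and sample values are invented.
POLICIES = [
    {"id": "P1", "field": "encryption", "require": True,
     "message": "data in transit must be encrypted"},
    {"id": "P2", "field": "retention_days", "max": 90,
     "message": "logs kept longer than 90 days"},
]

def check_event(event: dict) -> list[str]:
    """Return the IDs of policies the event violates."""
    violations = []
    for rule in POLICIES:
        value = event.get(rule["field"])
        if "require" in rule and value != rule["require"]:
            violations.append(rule["id"])
        if "max" in rule and value is not None and value > rule["max"]:
            violations.append(rule["id"])
    return violations

print(check_event({"encryption": False, "retention_days": 120}))  # ['P1', 'P2']
```

A sunset clause could be modelled the same way, by giving each rule an expiry date that the checker consults before applying it.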

Lastly, you know, like the IGF, we have the multistakeholder process. I think it's important that this is also still embedded in what we call AI governance or digital technology governance.

I think my time is up. So I would like to give the floor back. Thank you very much.

>> ENERST MAFUTA KATOKA: Thank you for that. I like the fact that you've talked about the multistakeholder approach. Speaking of that, I will also frame it in the context of international cooperation.

As you are aware, as new technologies evolve in the market, threats are also evolving. So there's a need for international cooperation to ensure that we're in good standing and running at the same pace.

I'm going to give the floor to Dr. Issakha Doud‑Bane.

You're in multiple organisations, among them intergovernmental bodies like the AI Policy Advisory, the AU, and the UN.

Dr. Khouzeifi, I just want to get your perspective on how international cooperation can ensure that AI and cybersecurity respect sovereignty and human rights.

And, also, I just want you to talk about what safeguards are needed to tackle fragmentation and align trust and global standards while managing national interests in cybersecurity.

Five minutes.

>> ISSAKHA DOUD‑BANE KHOUZEIFI: Thank you. Good morning to everyone. I'm very honored to speak here today. I would like to thank the previous speakers. This is an interesting table.

Yes, my name is Dr. Khouzeifi, from Chad ‑‑ I founded the Global Youth AI advisory body; I also coordinate the AI cyber diplomacy department, coordinate the Chad IGF, and serve as an Africa IGF MAG member.

In the face of emerging threats such as AI‑driven cyberattacks, deepfakes, disinformation, ‑‑ espionage, international cooperation must be grounded in mutual respect for sovereignty while aligning with universal values of human rights.

Youth advisors have co‑developed an AI for Humanity Code of Conduct, emphasising AI for peace and security, freedom of expression, and responsible enforcement of international law.

From a local perspective: at the last meeting, in April 2025, we saw firsthand how the globalisation of information through social media challenges local governments.

So cybersecurity cannot be divorced from digital literacy, especially in regions like where I come from.

Vulnerability is one of the key issues.

We have initiatives that link local realities to global frameworks, such as the Global Digital Compact, through open consultation and public‑private partnerships. Academia as well.

So, in conclusion, for interoperability, we need ‑‑ to embed human rights safeguards by design, with sharing rooted in trust and security, and we need to harmonise cyber norms that balance national interests with global security.

Trust is built through technical protocols but also through youth inclusion, cultural contextualisation, and AI governance.

Africa, in general, is not just a beneficiary. Let's commit to centralised models that ‑‑ diverse voices ‑‑

>> ENERST MAFUTA KATOKA: Thank you very much, Dr. Khouzeifi, for that.

I will now go to Dr. Monojit.

Dr. Monojit, you've done quite a lot of research in the cyber governance and national security area, especially in geopolitics. So you have expertise in these things. My question to you would be: How should governments prioritise cybersecurity policies to serve their national interests in this competitive environment? What strategies should they adopt to balance immediate security needs with long‑term digital resilience, especially considering the geopolitical tension around digital sovereignty and the enforcement of critical measures?

You have the floor.

>> MONOJIT DAS: Well, thank you, moderator. First of all, it becomes a bit challenging when you have predecessors speaking on everything, and you're coming on and adding something new.

Let me add my bit.

So, firstly, a disclaimer, that although I'm associated with the think tank of the Ministry of Defence, my views are personal.

On what you mentioned about prioritising policies, I will give you an example of what the government has initiated in India. We have come up with a whole‑of‑nation approach. We have introduced a future warfare course. This is very much open; it's not classified. We have tried to involve not just the tri‑services but the other stakeholders. In today's time, infrastructure is not solely ‑‑ it's with the government. In India, undoubtedly the world's largest democracy, we have to take the whole‑of‑nation approach as a priority to address these types of issues.

So what we have done is bring in all the stakeholders, and my query ‑‑ or, rather, I would submit here that the basic understanding is that we're all giving our opinions. Sometimes they're contrasting and contradicting as well, but we all remain united that there should be a central institution, or at least a platform like the UN, where we are currently sitting and discussing.

With the passage of time, we see the international ‑‑ is shrinking. We need to have a good discussion between governments at a larger level so that the platform remains significant.

Like the United Nations, as such, if you see ‑‑ there's failure in some cases. We should focus on building up and strengthening the central body.

So I don't push for a multilateral forum; it should be a multistakeholder discussion, but there's really a strong need. We should recognise beforehand that cyberspace is no longer a ‑‑ it's a tool of warfare. So, in the current scenario, every country is on the verge of making the first move. And there's no threshold in a cyberwar. There are accepted definitions by some countries that mention a cyberattack can be ‑‑ with a full‑scale war. So what is the threshold?

Before a country decides its threshold and wages a full‑fledged war, I believe we will still need a further discussion at the apex level under the UN, which I feel is necessary.

In addition to this, as discussions converge at the UN and among other countries, it should always start with something that is a problem for everyone. Like, for example, tackling fake news. These are common goals for every nation: whether they have diverging views, et cetera, they agree that fake news is a problem.

Slowly and steadily, we can find cooperation that can help us form some effective policy that can shape the way we are. Otherwise, every government has a different interest. For us, it may be different. For countries that are neutral to us, we'll have a different strategy.

In cyber, you cannot trust anyone, because we always trace the cyber actors back to having a state‑sponsored support mechanism in some way or the other.

As the previous speaker rightly mentioned, there is a lack of international law that governs ‑‑ we need someone, some architectural body that can oversee this. The existing mechanisms ‑‑ or, as is the case, many countries do not recognise them, if you see. So there should be some mechanism.

With this, I believe my time is close to ending, so I will hand it over.

Thank you.

>> ENERST MAFUTA KATOKA: Thank you, Dr. Monojit. You've highlighted quite a number of very important things. One that stood out is how fake news and deepfakes are emerging, and I think governments are finding it challenging to combat these deepfakes and everything.

It's creating a different perception in the minds of users. Most of these deepfakes, when they are thrown out, there's reputation damage and everything. So it's a challenge. So I agree with you. We really need to move to that stage.

I want to bring it to the civil society perspective because we know that civil society plays a key role in shaping accountable and transparent cybersecurity policies.

I'm going to invite Osei Keija. From your perspective, you've been in the civic space, and you're also a public interest specialist. How do you see civil society organisations influencing the development of cybersecurity frameworks and ensuring ‑‑ rights and social inclusion, especially when governments are deploying measures to safeguard national security? How can civil society contribute to a balanced approach that protects both sovereignty and individual freedoms?

>> OSEI KEIJA: Thank you. Thank you very much. The topic is very, very interesting. If we ask ourselves about the synonyms of "odyssey," we get something like "expedition": very long. And, for the record, my name is Osei Keija. I would like to welcome everyone to this conversation. Such a long journey.

Do you believe that the future of cybersecurity lies in civil society? Just shoot your hand up if you do believe.

Good. Two hands. Three.

(Laughter)

>> OSEI KEIJA: Two? Three? Awesome. It seems everything is being shoved to us, pushed to us. Oh, civil society, do this.

I do acknowledge that civil society organisations play a very crucial role. They serve as watchdogs in indispensable ways in cybersecurity governance, but the question is: Who are the people there? Is it me or you? Everyone is involved.

So the term "civil society organisation" should not stand in for everyone. I do believe everyone should be involved. That's my first argument.

Let's get into it. What have we been doing in all this conversation? It's a lot, honestly. Maybe my five minutes will lapse. From education to awareness, everything has been pushed to us: Civil society, do this. Civil society, do this.

One thing that's apparent, demanding transparency and accountability.

I come from Ghana, and we have the RTI, the Right to Information law, where you write to government to demand certain information. I liken that to the Freedom of Information Act in the U.S., where we request transparency and all that.

But, as I mentioned, how many civil society organisations are there in certain marginalised communities, or even have access?

So, in that regard, in that nuance, we need to activate something there. I do acknowledge that we can't all be activists and be at the forefront of things, but how can you and I contribute in our own way and demand accountability, policy accountability?

The human‑centric approach was mentioned.

There was mention of sunset clauses and all that.

You're a lawyer here. In your capacity, what can you do to demand accountability? Have you written a letter to your Ministry of Communications asking about surveillance? With regard to balancing legitimate powers, we need to activate something. How do we energise the base?

Secondly, co‑creating rights‑centric standards is something civil society has been doing, as in the case of the EU and, indeed, most countries, where there's co‑creation of impact assessments for critical infrastructure.

There are human rights impact assessments. I say security without rights is brittle. Security without human rights is brittle. We need to push for inclusive, equitable human rights for the long‑term health of society.

Lastly, I would like to talk about strategies of civil society forging alliances.

We saw the case with Brazil when they had issues with WhatsApp, with government on one side and Big Tech on the other, trying to make sure end‑to‑end encryption protocols ‑‑ I know my five minutes may elapse, but I would like to end by saying that civil society should not be a ‑‑ public interest should not be a ‑‑ group. We're all involved. We all know we cannot all be in front, but in our own small way, in education and awareness, we can make things happen.

So we must continue.

I will quote one of my favourite people: We must continue with all intellectual and spiritual energy to prepare for the emancipation of the ‑‑ forces.

We cannot clap with one hand.

Try. It doesn't work.

Collaboration.

(Clapping)

>> OSEI KEIJA: So let's collaborate. Thank you.

>> ENERST MAFUTA KATOKA: Thank you, very much, Keija. We can all be activists considering that we are all affected in one way or another. So let us be activists. Let's not leave it to civil society alone.

Let's also come together, governments, everything, all sectors.

Now I will open the floor to the room. We're going to have some questions from the room. We're going to have about maybe 10 minutes for questions from the room.

Anyone want to take up some questions?

>> Or comments.

>> ENERST MAFUTA KATOKA: Or comments.

There's a mic there.

No one?

Karsan, do we have any questions online?

>> GABRIEL KARSAN: There's a hand here.

>> EIRIK AALAND LITANGEN: Question regarding ‑‑ my name is Eirik. I have been working with various companies ‑‑ for many years.

My question is: When you talk about trust, how can you expect trust if you don't have people making sure they have privacy? How do you make sure that the ‑‑ how can you believe that the border control can be creating trust? You need to make sure that what is actually provided is secure, so providing a secure service would be providing the trust. How do you think about this?

>> ENERST MAFUTA KATOKA: Anyone want to go first?

Ihita?

>> IHITA GANGAVARAPU: Sure. I think I would like ‑‑ so we've had a comment on trust from Lily. I'm not sure if she's still here in the conversation?

>> LILY EDINAM BOTSYOE: Yeah.

>> IHITA GANGAVARAPU: Yeah. I think I would prefer ‑‑ but I just want to say transparency ‑‑ I'll just put in the keyword, and then I will loop in after Lily.

>> ENERST MAFUTA KATOKA: Okay. Lily, you can go.

>> LILY EDINAM BOTSYOE: Okay. I don't know if you can see me. I'm trying to get my video on, but I will just start by answering. I mean, in essence, trust, if you look at it, it's multifaceted. It requires many things to happen for it to be achieved.

I give an example.

There's been this context of privacy washing. You go to a platform and see a banner with a title that says: Hey, we preserve your privacy. We don't collect or sell your data. Then emails come, and you wonder how they got the information to reach out to you.

Privacy washing has become so widespread that people don't trust what they see online or the actions that follow.

So what was described is actually true. There is an offering that has to come from whoever is the provider for trust to be actualised.

The actions you take have to match the promises given to the user and the trust that the user puts in you. Otherwise, users do not really trust that the system will even work.

So for it to work, we usually would say advocacy, but it's also about the ways that people make demands.

We want to be able to understand: what you're putting out there, is it only a promise, or are you going to do what you say you're going to do?

Who does it go to or who is it shared with?

It's also about awareness online: people can perceive whatever they see online by way of pop‑ups and banners, comprehend what they're saying, and anticipate any future consequences.

I would say that what you said is true. There's a need for people to be able to trust. Without the actions and the service‑level agreement being fulfilled, users won't trust, and users won't trust for different reasons. It could be: yes, I trusted and my data was shared, or I later found out you were holding my data for some other purpose.

If you do what you say you're doing, it bolsters the trust and people can feel safe using your platform.

So that's what I have to say. I hope it's really helpful for you in the context that you were thinking.

>> ENERST MAFUTA KATOKA: Thank you very much, Lily.

Do you want to go?

>> IHITA GANGAVARAPU: Yeah, I think Lily has covered it.

>> ENERST MAFUTA KATOKA: Okay. Adisa?

>> BOLUTIFE ADISA: Trust goes both ways. You mentioned that whoever does the border control needs to ensure that whatever they provide is trustworthy, and I think that's extremely important. It's in line with what a lot of the speakers already mentioned.

Yeah, I think ensuring that the platforms are transparent as well as secure is the major recipe for trust, especially for end users.

>> ENERST MAFUTA KATOKA: Thank you.

Issakha Khouzeifi?

>> ISSAKHA DOUD‑BANE KHOUZEIFI: Yes. Just two words: before we talk about trust, we need to make sure we maintain control and regulation. Whatever language we use, users have to be sure that if they are online, they are, first of all, safe, and that what they use is also safe and controlled. For that, we need national mechanisms of a kind that would allow us to control this process. It's very important.

>> ENERST MAFUTA KATOKA: Thank you very much.

Anyone else want to go?

Okay. So we have another question from online. I think we'll take this first before we come back to the room again.

The question is from Tracy Upshaw: Do you think there can be a one‑size‑fits‑all set of universal cybersecurity standards that will work across all countries, or should there be a more regional or even economic‑status‑focused approach?

You can, okay.

>> OSEI KEIJA: Thank you very much, Tracy. Very insightful question.

There's no such thing as one‑size‑fits‑all, but, as I mentioned, security without human rights is brittle. Whatever we are designing, it must take into account the people. They are at the centre of it. Is what we are designing for them good? Are their rights respected? That should be the answer.

Policy organisations may come in, which is very good, but there must be a human‑centric approach, by design and not as an afterthought.

>> ENERST MAFUTA KATOKA: Thank you very much.

Anyone want to take that?

>> IHITA GANGAVARAPU: Maybe it depends on the sector or the country. When you're dealing with countries, you're dealing with cultural and economic diversity, and the resources each country has to implement complex standards vary: it could be from a human standpoint, it could be finances, it could be just local adoption and different challenges across that. Right? And not just that: the maturity levels of countries vary.

So there could be countries transitioning into having services ‑‑ to provide different services, and there could be countries like the U.S. or South Korea that are extremely digitalised.

I think standardisation meant to fit all countries collectively, without inputs from different stakeholder groups, is not going to work. This is what I think.

>> ENERST MAFUTA KATOKA: Thank you very much, Ihita, for that.

Also, for me, just to add on, I think it's very important for international organisations and international standardisation bodies to encourage countries to follow the global norms that have been set. Normally, the challenge is that many countries are not following these global norms in their respective regions, hence the cybersecurity gap that exists among countries, because some act in their national interest. So there's a lot of issues around that.

I won't take much of your time. I don't know if we have another question from the room before we conclude.

No question.

All right. So from each speaker, I want any key takeaways from your discussion, any closing remarks. Who wants to go first?

Keija?

>> OSEI KEIJA: Thank you very much. It's been an insightful discussion. My belief, desire, and aspiration is that we will leave here energised to work for the good of all humans and to make sure that security is at the core of whatever we are doing, with trust and privacy.

And, most importantly, humans are at the centre of everything. Each of us, in our own individual capacity, with our minds, everything, contributes to the discourse.

Just like the topic says, cybersecurity is an odyssey ‑‑ a very, very long journey. Life is better in company. Let's all copilot.

Thank you.

>> ENERST MAFUTA KATOKA: Ihita?

>> IHITA GANGAVARAPU: Thank you so much for this opportunity today. From what I understand about digital sovereignty, it cannot work in silos. You need a very layered approach. You know, we spoke about how regional, national, and even global cooperation is required in this realm.

That's something I think we've been discussing, even in the GDC discussions: how you need these structures and policies in place. When you look at the work being done in the policy space, whether regulation or standards, it has to be flexible, scalable, adaptable, and adoptable by countries, organisations, and individuals.

So I think I will leave it to that. Yeah.

>> ENERST MAFUTA KATOKA: Thank you.

Before I go to Dr. Monojit, Lily, any final closing remarks?

Then Atsen will follow.

>> LILY EDINAM BOTSYOE: I think I tend to talk a lot. I will end by saying this. Trust is not a by‑product of strong policy. It's a foundation of it. Let's build with trust in mind and not think of it as an afterthought.

Thank you.

>> ENERST MAFUTA KATOKA: Thank you, Lily.

>> SAMAILA ATSEN BAKO: We've spoken about things in multiple dimensions, pushing things in the right direction, whether it's AI ethics, zero trust, principles of ‑‑ while I admit that some of them are technical in nature and beyond the ability of every person or every Internet user, it's worth noting that our IT security teams have some support. So that I do not sound like a fearmonger, I would like to end with positive thoughts.

It's one side of the security triangle. We have people helping us. When dealing with risks, things like generative AI or quantum computing that can break algorithms, we should not forget to enlighten people and also adopt tools that are going to address the risks we are concerned about.

Remember that a knowledgeable person, equipped with the right tools, is your first line of defence.

Learn more, understand how policies affect you or your organisation. You will definitely be playing your part.

Thank you so much.

>> ENERST MAFUTA KATOKA: Thank you very much, Atsen.

Dr. Monojit, you can go next.

>> MONOJIT DAS: Yeah. When you talk about security, the biggest question is whether we'll be building 10 schools or buying helicopters and ammunition.

In terms of cyberspace, either we're going to get privacy or security. Privacy and security, we sometimes find, don't come together.

So, with this, I would love to mention that, since my esteemed panellists are already working in multiple fields, let's do something that can pave the way for future collaboration. For example, tackling fake news: I would say it's a challenge for every country that faces it.

Consider how Wikipedia came up a few years back. It was not so reliable ‑‑ I will not use another word ‑‑ but it has developed. Now you will find some interesting or valuable information there.

At least we can have something generated from our side, and it can be one point of contact.

Then we send something in, and it verifies whether it is right. At least it can start off that way. Like the five‑year plans that used to exist with governments and agencies, we can have a five‑year plan for tackling fake news, something to pave the way for other things. Diverging views lead to diverging paths. We may not converge sometimes.

>> ENERST MAFUTA KATOKA: Thank you, Dr. Monojit.

Dr. Khouzeifi, you can go next.

>> ISSAKHA DOUD‑BANE KHOUZEIFI: First of all, thanks to the participants for being here to listen to us. There is no good or bad idea. It's always about discussions. I would like to say thanks to my collaborators and friends here, panellists and also to you, Moderator, for coordinating this. I won't forget about Karsan, also, who is online coordinating the platform. Thank you, also, Karsan.

In conclusion, let me highlight this: securing our digital future demands more than just reactive policies. It calls for proactive inclusion and grounded cooperation. From Nairobi to Jakarta, we must build a cybersecurity system founded on trust and guided by human ‑‑ we are co‑creators of the digital compact. Let's ensure, together, that AI in the service of humanity protects the dignity of all.

Thank you very much.

>> ENERST MAFUTA KATOKA: Thank you, Dr. Khouzeifi.

Finally, Bolu?

>> BOLUTIFE ADISA: Thank you, everyone, for listening to us today. I think the panellists have really spoken well. For me, at the end of the day, I think trust and security are essential tools in building the digital backbone for the next century.

So when we think security, when we think trust, I would like us to think more in terms of resilience, because it's important that we don't wait for things to go bad before we look for solutions.

So, in the security sense, it's important to always test, always discuss beforehand, always see where your system fails, and see whether AI will perhaps not work in certain aspects, so that we do not fall into a situation where humanity suffers from the products of the innovation we see today.

So it's important that we test the resilience of these systems, and that we keep talking, keep pushing the policies and the necessary frameworks required to ensure that the systems work for our good and not for doom, like some people might think.

But thank you very much.

>> ENERST MAFUTA KATOKA: Thank you very much, everyone.

So my final takeaway is that trust is the foundation upon which resilient digital futures are built. Our best chance at success lies in collaboration, both within our regions and across borders.

Only with open dialogue and concerted efforts can we secure our digital societies.

Let's continue this vital journey of building bridges of trust across our regions.

Thank you very much.

(Applause)