Your Data Privacy: What The Experts Think

Lauren Toulson
Published in CARRE4 · 9 min read · Apr 28, 2021


Who should decide how our data is used? Can the same regulation be enforced globally? How do we get Big Tech to look after their consumers?

Our experts at the Bucket List Summit discussed these questions and more, and this article shares with you the main ideas from this discussion.

We were joined by:

Ivana Bartoletti, Author of An Artificial Revolution: on Privacy, Politics and AI and Visiting Policy Fellow, Oxford Internet Institute.

Tom Foale, Founder and CTO of Klaatu IT Security.

Junaid Qadir PhD, Professor at Information Technology University, Lahore; Human-Centered AI Researcher.

Zsuzsanna Felkai Janssen, Head of Sector at Directorate General Home Affairs at the European Commission.

and Humayun Qureshi, Co-Founder of Digital Bucket Company.

Here’s 90 minutes in 9. Watch the full discussion here, starting at 1:56:00.

Our speakers: Bill (the host), Humayun, Zsuzsanna, Ivana, Tom and Junaid.

Let’s have some initial thoughts…

Ivana: Let’s look at where we are now. What has the pandemic meant for user privacy? We’ve moved towards more automation, more digital solutions, and connecting and working globally online. However, privacy has suffered through new workplace surveillance of people working from home. Contact tracing has been collecting huge amounts of data, for a good purpose.

However, Cambridge Analytica showed us how data collected for good purposes today can be turned to commercial ends in the future, violating privacy.

Can we look at new ways of sharing data for common good, while also totally protecting user data?

Junaid: AI is a double-edged sword. It can be used for good but can also create harm. Surveillance capitalism is becoming the dominant model: data is used to predict how people will behave and to modify that behaviour, which is sinister. Companies can nudge you to behave in ways that serve a corporation’s profit, and massive personalisation means everyone sees a different reality. We are seeing the rise of filter bubbles and the infodemic: so much information that we can’t tell what is junk and what is reliable.

Do you know if the information you find is real?

Zsuzsanna: We need to remember that AI is a technology and contextualise it in technical and legal terms. We have to understand the terminology: what is surveillance, what is biometric data? We need to understand what legal boundaries we have, and most importantly, transparency is key:

We need to keep the public informed to build trust and avoid misunderstanding.

Tom: The World Wide Web is fundamentally flawed, says its inventor Tim Berners-Lee. Due to an architecture problem, sites like YouTube have a malign incentive to promote more divisive content, because it keeps people watching. This contributes to misinformation. Having privacy means controlling our own data, but that’s not commercially viable, so to get the data they need, platforms follow perverse incentives. That’s what needs to be fixed. We need platforms people actually want to use, and that’s an architecture issue.

Is it right to discriminate for the sake of fairness? To use private user data to reduce algorithmic oppression?

Ivana: Yes, it’s absolutely crucial to audit a system to identify every possible type of bias and address it wherever possible, even if that means using special category data.

Junaid: We have multiple notions of fairness, and these are often in contradiction, so we have to make trade-offs. And even if you omit categories of data, for instance race, other variables in the data may pick up the same patterns and recreate those biases anyway.

We need to be enforcing fairness at the output, not the input. If you restrict the input, it can make the output even more unfair.
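To make this point concrete, here is a minimal, hypothetical sketch (not from the panel) of the proxy problem Junaid describes: a model trained without the protected attribute still produces biased outputs, because a correlated feature stands in for it, so fairness has to be checked at the output by comparing selection rates across groups. All data, feature names and the demographic-parity check below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0 = group A, 1 = group B).
# It is never given to the model as an input, only used to audit the output.
group = rng.integers(0, 2, n)

# A "neutral-looking" feature strongly correlated with group membership
# (a proxy, e.g. a postcode), plus a genuinely relevant feature.
proxy = group + rng.normal(0.0, 0.3, n)
skill = rng.normal(0.0, 1.0, n)

# Historical labels are biased against group B.
y = (skill + 1.0 - 1.5 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

# Train on inputs that omit the protected attribute entirely.
X = pd.DataFrame({"proxy": proxy, "skill": skill})
pred = LogisticRegression().fit(X, y).predict(X)

# Output audit: positive-prediction (selection) rate per group.
rates = pd.Series(pred).groupby(group).mean()
print(rates)
print("demographic parity gap:", abs(rates[0] - rates[1]))
```

In a sketch like this, enforcing fairness at the output would mean adjusting the decision rule (for example, per-group thresholds) until the audited gap is acceptable, rather than assuming the model is unbiased simply because the protected column was dropped.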

Following on from that, our audience asked Tom: Surveillance capitalism is focused on the use and abuse of data by Big Tech. How do we regulate big tech?

Tom: There are already fines, most recently issued to Google and Facebook. Facebook in particular are using data in a bad way, as is evident from their platform. However, we need much more focus on defining what ‘bad behaviour’ actually is.

Facebook use the addictive colour red to draw us back into the platform, so they can collect more data and money from us.

Do national approaches to privacy differ?

Ivana: Although more than 180 documents on ethical principles have come from different countries and organisations, there is some convergence on values, like the need for a human-centric approach. Privacy is certainly treated differently in China, but I think we have alignment globally on the main principles for privacy and ethical AI.

Humayun: I agree with Ivana. The biggest challenge, both regionally and nationally, is enforcing these principles; on an international scale, the issue is that it becomes a political game between China and the US.

Humayun asks the panel: Who should be enforcing these principles on a global scale while we work out who should regulate internationally, and what should be regulated?

Junaid: In 2020 a paper was published with 100 ethical principles, and East and West both agreed we need more justice and more transparency. But there are differences in how AI is used internationally, and we need to look at the variances in values on their own terms.

AI suffers from white man’s problems, especially from a Western perspective, and those Western values are thrust upon the rest of the world. We need to understand and recognise how those values work across the world.

The Western world focuses on individuality, while the East holds more collective values, so Eastern societies are more willing to exchange their privacy for the welfare of the whole society. Individual privacy is good, but surveillance should be done to improve the welfare of the people.

Zsuzsanna: UNESCO, the WEF and the OECD have all been working on guidelines, and the Council of Europe recently held a convention. Principles can number anywhere from 100 to 7 depending on the level of detail, but how do you implement them in a concrete context? We need to ask who the big players are and what technologies are developing. We don’t want so much regulation that it creates hurdles, especially if that means one country gets the technology while another doesn’t.

How can we enforce regulation?

Humayun: How can we enforce regulations? People can build systems in their bedrooms that capture data; it’s not just the big corporations.

You can put together rules, but without enforcement, rules are just rules.

Ivana: What we have seen in the last few months is Uber and Deliveroo being taken to court and told their algorithms are not transparent. We need to be able to see how they are working. So we already have some laws that can be applied.

But on top of this, even if the AI itself works perfectly, its deployment can become the problem.

For instance, even if facial recognition technology could recognise everyone of every skin colour and gender perfectly, its use can still be discriminatory. It will still end up bearing down on the most vulnerable people in society, while privacy becomes a luxury good that only a few can afford. So we also need to regulate the use.

Tom: We can drive industry in the direction we want with the right regulation, changing the perverse incentives that big tech companies currently align with. For instance, taxing waste incentivised recycling, and introducing corporate manslaughter offences pushed a focus on health and safety that wasn’t there before.

They won’t do the right thing until regulation enforces it.

Sustainable and ethical company behaviour usually only begins with new regulations requiring them to do so.

So what input will regulation have on corporate behaviour?

Zsuzsanna: Public authorities are under much higher scrutiny, because there are greater concerns about the state having exclusive power over data. So public authorities are regulated and addressed a lot more. This work now needs to be done on the private sector, especially as surveys reveal that public trust is higher in the private sector.

The problem is enforcement, and only a few people understand how the AI is working.

Returning to the example of Uber, the audience raise the issue that worker rights differ internationally, meaning only the UK is currently protecting these workers.

Ivana: What’s interesting about the Uber case is that the algorithm assigning shifts was discriminating because it didn’t recognise the reasons workers would cancel a shift last minute. In Italy, you can cancel last minute because strikes are allowed to be called last minute. So AI not understanding these reasons would be discriminatory.

This is not only an issue for data protection and privacy law; it’s an intersection of employment law, consumer legislation and human rights legislation. If we want to safeguard our rights in the age of AI, we need collaboration.

Do we need a new regulator? The public sector definitely has higher risks, because where do we go when our rights are breached by the public sector?

Worker rights differ from country to country. How do we equally protect workers working for global companies?

What happens when the Government tenders out facial recognition to private companies?

Tom: There needs to be transparency — why it’s done, and control over how it’s done. There needs to be a way to address the outcomes. I’m very much in favour of strong regulation in that area.

Ivana: Facial recognition isn’t just one thing. We’re getting used to it because we unlock our phones with it, but that’s just one application. There is regulation for it because it’s classed as biometric data, but the main question is whether we need it deployed in every context. What does it mean to be watched in the supermarket, or in a public square? We don’t want to end up with technological solutionism, where we just use it to solve the problem of security. We need to stop and question whether it needs to be deployed in each context.

Are you aware police can deploy this technology in the streets today to profile you? The latest Netflix documentary Coded Bias reveals all.

Junaid: Moving ahead, meetings like this are important for improving discussion of the issues and solutions. But there is also the issue Tom raised, that companies act in their own interests, so we need incentives to change company behaviour. It is up to us to create the right architecture, with technical fixes and policy making.

When we have such an architecture, hopefully soon, we can use AI for a lot of social good.

Key points:

Data collection that seems to be for a good purpose today could easily be turned to commercial use that violates privacy, as we saw with Cambridge Analytica. Big tech do not have users’ interests in mind and will only do the right thing when regulation requires it. We need to decide what ‘bad behaviour’ looks like and steer company behaviour with the right regulation.

However, regulation is pointless without proper enforcement. The public sector is under scrutiny because it deploys AI in high-stakes settings such as the justice system and welfare; the private sector, however, gets less attention when it comes to enforcement.

Nations broadly agree on the same basic ethical principles, like a human-centric approach and justice, but we need to make sure that technologies designed with Western values in mind are not simply thrust upon cultures with different approaches.

We need to enforce fairness at the output, not the input, which means using protected and special category data (gender, age, ethnicity, background) to audit outputs for fairness.

Even if an AI is perfect and totally unbiased, its deployment can still cause discrimination, as with facial recognition. We need to stop and question whether the technology is really needed in each context, rather than resorting to technological solutionism.

You can watch the full conversation here.


Studying Digital Culture, Lauren is an MSc student at LSE and writes about Big Data and AI for Digital Bucket Company. Tweet her @itslaurensdata