That's Fresh! Newsletter
Read a selection of our past issues.
- Google's answer to ChatGPT. And: Generating synthetic data within relational databases. Let's meet at WAICF! (February 8, 2023)
- Understanding ChatGPT better. And: How to deal with imbalanced data. More about our product. (December 14, 2022)
- A curated list of failed ML projects. And: How to build a data strategy. Clearbox AI and Bearing Point partnership. (November 16, 2022)
- Our open source library is now on GitHub. And: Clearbox AI on Cybernews. (June 22, 2022)
- Discovering Dagster. And: Quantifying privacy risks. Use case: a synthetic data sandbox to freely share data. (June 8, 2022)
- Can interaction data be fully anonymized? And: Synthetic Data for privacy preservation: understanding privacy risks. Discover our Enterprise solution. (April 6, 2022)
- What are GFlowNets? And: Improve models with Synthetic Data. Use case: augment financial time series. (March 16, 2022)
- The European Commission selected us for the Women TechEU pilot project! And: What is Synthetic Data. The new Synthetic Data platform. (March 9, 2022)
- The EDPS on Synthetic Data. And: From raw to good quality data. Changelogs: now you can upload unlabeled datasets. (February 23, 2022)
- Gartner's 2022 Technology Trends. And: How to harness the power of AI in companies. Changelogs: new metrics available for your synthetic dataset. (February 9, 2022)
This week’s discussion topic is data privacy applied to interaction data. Interaction data is usually collected by phone carriers, messaging apps, or social media companies and, when pseudonymised, is generally regarded as safe with respect to privacy risks. However, in the linked article, researchers from the Computational Privacy Group at Imperial College London argue otherwise, showing that the supposedly anonymised data is susceptible to profiling attacks.
In particular, they demonstrate that deep learning algorithms can be trained to perform successful linkability attacks. Linkability is defined as “the ability to link, at least, two records concerning the same data subject.” Vulnerability to this specific attack means that interaction data should be treated as personal data even when direct and indirect identifiers are removed.
From a technical point of view, the fascinating part is how the researchers devised the attack itself. They first represent each individual in an interaction dataset as an interaction graph describing that person's interactions up to a specified depth. They then train a geometric deep learning model on these interaction graphs to link individuals across datasets. They demonstrate the accuracy of the attack on several datasets, including a Bluetooth proximity dataset similar to the data collected by COVID-19 contact-tracing apps.
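To make the idea concrete, here is a minimal sketch of the first step of such an attack: building a depth-limited interaction graph around each individual and linking a target across two pseudonymised snapshots by comparing a crude structural signature. This is an illustration of the concept only, not the paper's geometric deep learning model; all function names (`ego_graph`, `fingerprint`, `link`) and the degree-sequence signature are hypothetical simplifications.

```python
from collections import defaultdict, Counter

def ego_graph(edges, node, depth=2):
    """Return the edges whose endpoints lie within `depth` hops of `node`."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    frontier, seen = {node}, {node}
    for _ in range(depth):
        frontier = {n for f in frontier for n in adj[f]} - seen
        seen |= frontier
    return {(a, b) for a, b in edges if a in seen and b in seen}

def fingerprint(edges):
    """Crude structural signature: the ego graph's degree sequence,
    sorted in descending order (a stand-in for a learned embedding)."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return tuple(sorted(deg.values(), reverse=True))

def link(edges_t1, edges_t2, candidates, target, depth=2):
    """Match `target` from snapshot 1 to the candidate in snapshot 2
    whose fingerprint shares the longest common prefix with the target's."""
    fp_t = fingerprint(ego_graph(edges_t1, target, depth))

    def score(c):
        fp_c = fingerprint(ego_graph(edges_t2, c, depth))
        k = 0
        for x, y in zip(fp_t, fp_c):
            if x != y:
                break
            k += 1
        return k

    return max(candidates, key=score)
```

Even with pseudonyms swapped between snapshots, an individual's local interaction structure can persist and act as an identifier; the paper's contribution is showing that a learned graph model makes this matching accurate at scale, where a naive signature like the one above would not.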
The key takeaway from these sophisticated attacks is that data cannot be fully anonymised. As attack capabilities grow, for example thanks to deep learning, we cannot take a simplistic approach to privacy risks when anonymity cannot be guaranteed. On the bright side, we see progress in privacy engineering, risk quantification, and comprehensive risk assessment rather than a checkbox approach. More on this in the coming weeks!
In this Nature article, researchers from the Computational Privacy Group at Imperial College London demonstrate that interaction data remains identifiable even across long periods of time.
We care about your needs. That's why we offer our technology as a flexible solution that can be installed locally or in the cloud. Are you curious?
The first part of this blog post series on Synthetic Data for privacy preservation introduces the analysis of privacy risks, to better understand how to protect data.