Everybody has something to say about the Facebook / Cambridge Analytica case. And I am annoyed by people saying that when you give your data to Facebook you forego some parts of your privacy (true), so you should not be surprised (false). In simple terms, this was an actual data breach. Individuals who had not consented had their data exposed. This was not supposed to happen. There are two aspects I want to focus on:
The Facebook perspective
From the Facebook perspective, this is a plain data breach due to bad design. The design of the API allowed this to happen. Technically it was not a hack: there was no 'bug' in a piece of software that was exploited. It was a design flaw. Whether it was intentional or unintentional is a different discussion. Looking at it from the perspective of the upcoming GDPR, if the API had been released after May 2018, a Data Protection Impact Assessment would have been required, and such a design flaw would be punishable. With fines of up to 4% of global annual turnover, and based on the $40 billion revenue in 2017, the penalty could go up to $1.6 billion. At the moment, though, there is no GDPR relevance here.
The Cambridge Analytica perspective
Cambridge Analytica is a data analytics company. It lives off the data it collects. The company is irrelevant without data, and the quality of the services it offers is directly related to the amount of data it has. Cambridge Analytica engineers found that they could collect additional data, and they did. They tried to improve their company's offering. What would you do?
And what about ethics?
Here's the thing, though. Let's say I work for an automotive company and I find the leaked designs of an imaginary, successful car company called Tigers (e.g. on a poorly protected S3 bucket, as has often been the case lately). Do I use them? I didn't hack anyone, and I don't care how they ended up there. A design flaw on Tigers' side exposed them. By using them I can add value to my company: I avoid the R&D costs and take advantage of someone else's efforts, without doing anything illegal myself.
Having said that, I believe there is a consensus that they should not be used. But if I cannot use intellectual property that accidentally came into my hands, why can I use private data that I happened to stumble upon? Shouldn't I report it instead?
As it turns out, not only did Cambridge Analytica not report it, they even bragged about it.
A failure at many levels
The way I see it, the people at Cambridge Analytica failed to respect the intended use of the data. They bragged about having it and using it. But I may give them the benefit of the doubt; they may not have realized that their actions violated the subjects' privacy. At the same time, the people at Facebook failed to close the loophole when they found out. Although Cambridge Analytica was vocal about having the data since 2014, Facebook only changed the design in 2015, some months afterwards.
But the most obvious failure is that nobody spoke up. People at both Cambridge Analytica and Facebook knew about this. It was not even a secret; it was all over the news. Yet nobody came forward to say 'hey, this is wrong', or 'hey, we need to inform people'. And this is where we should all be worried. Our expectation of privacy rests on companies respecting our privacy, and companies are just sets of individuals.
Among the (at least) 100 people who knew about it, I am very interested to find out who was actually willing to jeopardize their job in order to do the right thing and inform upper management, the public, and the press about the violation. We know that speaking up is difficult, but unless people are willing to speak up, we will never be in a better place.
First posted on Speak up at work