Michal Wendrowski, Astec’s Commercial Director, discussed the post-GDPR reality at KuppingerCole’s European Identity and Cloud Conference, which took place in Munich from May 14 to 17, 2019. Watch the video below or read the transcript at the bottom of the page.
Michal Wendrowski: “At Astec, we have many products that we’ve delivered throughout the years, and there are two examples I want to tell you about that show the real-world difference between anonymization and pseudonymization. The first one: we built a mobile app for a utility company in the Czech Republic, and before GDPR we actually had access to the personal data of all their customers (their names, where they live, and so on). After GDPR, that was obviously no longer possible, so we built a tool that automatically anonymizes all the data we receive. Before our development teams start working on any change requests in that software, the project leader first verifies that the data being passed to the team really is anonymized, because there is no business reason for us to have that data or to work with it at all.”
Moderator: “Is anonymized data still useful data?”
M.W.: “We’re still able to work on the change requests, so we require nothing more. In that case, we’re able to deliver whatever the customer requires.
Then, another example is a new product that we’ve built, our own product. We are an IT services company, we build software for others, and we have a big network of other companies like that. It’s a Slack bot that basically asks one question every Thursday: ‘Who was most helpful to you this week?’. The bot needs to be installed on a Slack workspace, so we get the information of all the users in that workspace, meaning all the employees in those other companies. And if you’re an IT company, you don’t really want other IT companies to know who works for you, because that might make it easier for them to recruit those developers. So we had to put a real emphasis on data protection in that case. What we did is build software that doesn’t store any personal data in our databases, only IDs that allow us to retrieve the personal data from Slack when it is actually required. That is the kind of pseudonymization we’re using in that case. We couldn’t use anonymization because it’s our own product, it’s pretty new, and we have to get feedback and work on it. So those are two real-world examples of how you could use both approaches.”
Moderator: “If you’re saying that anonymized data is still useful information, shouldn’t it be protected the same way as other personal information?”
M.W.: “It’s not useful in that case, because we do not need it to make the software changes. We get a change request saying, ‘There is a bug, e.g. this window cannot be closed,’ or ‘The system is not working as fast as we’d like it to.’ We don’t need that data; we just need some kind of dummy data to use the software.”
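The pseudonymization pattern Wendrowski describes — keeping only opaque Slack user IDs in your own database and resolving them to personal data only at the moment it is needed — can be sketched roughly as follows. This is a minimal illustration, not Astec’s actual implementation: the `slack_lookup_stub` function and the example IDs stand in for a real call to the Slack API.

```python
# Sketch of the pseudonymization pattern from the transcript:
# the database stores only opaque Slack user IDs; names are resolved
# through a lookup (here a stub standing in for the Slack API) only
# when actually displayed, and are never persisted.

from typing import Callable, Dict, List, Tuple


def slack_lookup_stub(user_id: str) -> str:
    """Stand-in for a Slack users.info call; mapping is illustrative only."""
    directory = {"U123": "Alice", "U456": "Bob"}
    return directory[user_id]


class HelpfulnessVotes:
    """Stores weekly 'who was most helpful' votes as ID pairs only."""

    def __init__(self, resolve: Callable[[str], str]) -> None:
        self._resolve = resolve
        self._votes: List[Tuple[str, str]] = []  # (voter_id, helper_id)

    def record_vote(self, voter_id: str, helper_id: str) -> None:
        self._votes.append((voter_id, helper_id))

    def weekly_tally(self) -> Dict[str, int]:
        """Aggregate votes by ID; no personal data is touched here."""
        tally: Dict[str, int] = {}
        for _, helper_id in self._votes:
            tally[helper_id] = tally.get(helper_id, 0) + 1
        return tally

    def display_name(self, user_id: str) -> str:
        """Resolve an ID to a name only at display time, never stored."""
        return self._resolve(user_id)


votes = HelpfulnessVotes(resolve=slack_lookup_stub)
votes.record_vote("U123", "U456")
votes.record_vote("U789", "U456")
print(votes.weekly_tally())        # tally is keyed by ID, not by name
print(votes.display_name("U456"))  # name fetched on demand
```

The design choice is that deleting or revoking the lookup (e.g. the bot being uninstalled from a workspace) immediately severs the link between stored IDs and people, which is what distinguishes pseudonymized from directly identifying data under GDPR.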