Metin Kortak, CISO at Rhymetec, talks about how organizations are approaching data privacy and security compliance, and how they are thinking about risk management policies, when it comes to generative AI in the workplace.
Below is a lightly edited transcript from the Decipher podcast conversation.
Decipher Podcast: Metin Kortak
Lindsey O’Donnell Welch: This is Lindsey O’Donnell Welch with Decipher and I’m here today with Metin Kortak, CISO with Rhymetec. Thank you so much for coming on today. It’s really nice to speak to you.
Metin Kortak: Thank you very much for having me.
Lindsey O’Donnell Welch: Can you talk about your path into the cybersecurity industry and what drew you to the CISO role?
Metin Kortak: Yeah, absolutely. I have a computer science background, and when I first started working at Rhymetec, we were only offering penetration testing as a service to our customers. Later on, we realized that there was a big demand from our customers for becoming compliant with various cybersecurity frameworks, which at that time wasn’t my specialty – I was more of a network security person. But as we realized how big that demand was, we expanded our business into compliance and cybersecurity services.
Lindsey O’Donnell Welch: I know that you do a lot with compliance and privacy, and I wanted to talk a little bit about what you’re seeing there, specifically with generative AI being such a big topic over the past year and becoming generally available. How does AI fit into companies’ existing compliance and privacy frameworks, from your perspective?
Metin Kortak: Yeah, I always say that because technology evolves so fast, laws, regulations, and compliance frameworks always come after the technology has already been created and built. We have actually been working with AI systems for the past couple of years, but only recently have compliance frameworks and regulations become more solid. Recently we’ve been working with ISO 42001, a framework that was created specifically to secure artificial intelligence systems.
But this framework has only been in place for a couple of months, and even the auditors we’re working with are not yet accredited to conduct audits against it. So it’s all very new, and there are a lot of concerns from our customers because they want to make sure that they’re doing the right thing and complying with certain regulations. But at the same time, the regulations are not really available to them, so they don’t have a lot of guidance from the government or from other cybersecurity framework providers. It has definitely been difficult, and what we have been doing is following these guidelines, and sometimes we have to create our own guidelines for ensuring data privacy and data security.
Lindsey O’Donnell Welch: Outside of the Biden administration’s executive order around AI and security, there haven’t really been any official standards that people or companies can point to and say, here’s what we need to do about AI and privacy and security. I know the EU recently passed the AI Act, which outlined some of the governance policies that companies need to follow. Is that something that is top of mind for companies?
Metin Kortak: Yeah, absolutely, we’ve been following the key frameworks. We have also been following the NIST AI frameworks that have been released but are not really being used by a lot of companies right now. And on top of that, as you know, GDPR has been around for a long time.
In California, there is also the CCPA for data privacy. Even where there isn’t an official artificial intelligence cybersecurity framework, what we have been doing to get around that is ensuring that our customers still comply with frameworks like GDPR and CCPA while they are producing artificial intelligence systems. Even though there aren’t specific AI guidelines, there are guidelines around data privacy and data security, and we can interpret those guidelines and ensure that AI systems still comply with those frameworks.
“It has definitely been difficult and what we have been doing is following these guidelines and sometimes we have to create our own guidelines for ensuring data privacy and data security.”
Lindsey O’Donnell Welch: Yeah, so it seems like the main approach here is to look at the existing frameworks and see if those policies can encompass what we’re seeing with AI, and lean on those existing ones?
Metin Kortak: Correct. For example, when we’re working with artificial intelligence systems, there are large language models – LLMs. These models capture personal information and other data, yield results based on that data, and continue to learn from it. And under a data privacy framework like GDPR, end users have the option to have their data removed. So what we do is implement procedures so that their personal data can be removed not only from databases but also from the language models, so that data cannot be used to train the AI’s learned behavior.
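To make that concrete, here is a minimal sketch of what the database side of such an erasure procedure might look like. It assumes a simple SQLite store; the table names, columns, and training-exclusion list are illustrative, and removing data already baked into a trained model would additionally require retraining on the filtered dataset or a machine-unlearning technique, neither of which is shown here.

```python
import sqlite3

def handle_erasure_request(db_path: str, user_id: str) -> None:
    """Minimal GDPR 'right to erasure' sketch (illustrative schema).

    Assumes the tables user_profiles, user_events, and training_exclusions
    already exist; a real system would also need to cover backups and
    downstream copies of the data.
    """
    conn = sqlite3.connect(db_path)
    try:
        # 1. Remove the user's records from the primary data stores.
        conn.execute("DELETE FROM user_profiles WHERE user_id = ?", (user_id,))
        conn.execute("DELETE FROM user_events WHERE user_id = ?", (user_id,))

        # 2. Record the user on an exclusion list consulted by the training
        #    pipeline, so their data is filtered out of future training runs.
        conn.execute(
            "INSERT OR IGNORE INTO training_exclusions (user_id) VALUES (?)",
            (user_id,),
        )
        conn.commit()
    finally:
        conn.close()
```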
Lindsey O’Donnell Welch: Do you see companies thinking about data governance at all – is that top of mind for people as it relates to AI – or are people mostly just diving in headfirst and saying, “Here’s this really cool AI application that we can deploy,” and then not really [thinking about] dealing with the consequences after?
Metin Kortak: Yeah, I’ve been seeing a lot of companies just jumping on the bandwagon. Now that AI is out there, they’re saying, “We have to do something with AI,” and they’re working with all of these third-party providers and trying to build their own artificial intelligence systems. But they’re trying to do it fast, because it’s no longer about data security, governance and privacy – it’s about competing in the marketplace.
Everybody wants to make sure that they have some type of AI product, because now it makes them better than the competitor that doesn’t. So I have been seeing very little attention to cybersecurity and data privacy when implementing these artificial intelligence systems, because companies mostly care about how they can beat their competitors. And because there weren’t a lot of regulations or compliance frameworks, it was almost a free-for-all – you could do whatever you wanted, create your AI system, opt your users in, and capture their data without facing solid consequences from a legal standpoint.
I think that’s why a lot of those recent laws in the European Union and other countries have been making a bigger difference because companies actually now care more about data governance and privacy as it relates to artificial intelligence systems. But before that, what I saw was that companies just tried to utilize these AI systems as much as they could without facing many consequences.
Lindsey O’Donnell Welch: Yeah, that seems to be the overall trend. When you’re looking at the data governance policies themselves, one best practice I’m seeing for companies that are implementing AI systems is to map out all the different data sources being used in the AI model training. And there’s so much there, right? It’s crazy. But a lot of these models aren’t really publicly available. So what’s the best way to navigate something like that?
Metin Kortak: Yeah, a lot of these companies are now using open-source artificial intelligence systems, meaning the AI platforms are learning from publicly available data – publicly available images, text, Google searches. So there’s definitely a difference between publicly available data and data privately owned by end users. If data is publicly available, there aren’t any regulations that prevent companies from using it. I can do a Google search, use the information I see in articles and other links, and utilize that information to teach my AI model to respond in a certain way.
Where it gets trickier is when behavior is based on personal information. If a lot of people like the color yellow, and they say so in their Instagram stories or their Facebook posts, that information can be personal data. And if AI models are making decisions based on private information like that, that’s when it becomes an issue from a data governance and data privacy standpoint, because now the AI model is not just learning from publicly available information. It is actually obtaining that data from individual user accounts and utilizing their personal information to make certain decisions.
“I think that’s why a lot of those recent laws in the European Union and other countries have been making a bigger difference because companies actually now care more about data governance and privacy as it relates to artificial intelligence systems.”
Lindsey O’Donnell Welch: I’m curious, more from the defense side of things, how you’re seeing AI transforming actual cybersecurity practices this year. How does that compare to what you’ve seen in the past?
Metin Kortak: Yeah, so like I said, when I started working at Rhymetec, we were just doing penetration testing services, and penetration testing is pretty manual labor. You have to understand what vulnerabilities are in place and then, at times, exploit those vulnerabilities in order to identify any issues with networks, servers and other platforms.
Recently, we have been seeing AI models used to aid penetration testing, or even conducting the penetration test on their own by identifying security vulnerabilities and eventually exploiting them. This is great from a pen tester’s standpoint, because they now have an easier way to conduct these penetration tests and understand these vulnerabilities. However, it can also be dangerous in the hands of the wrong people, because it means people now have a much faster way of identifying and exploiting security vulnerabilities.
So how I see this impacting the future of cybersecurity is that in the beginning it might definitely be dangerous, because people will be able to identify these security vulnerabilities a lot faster. But at the same time, if this practice becomes more common, a lot of organizations can also implement much better security controls, and the standard for cybersecurity can be a lot higher.
Lindsey O’Donnell Welch: I think you bring up a really interesting point – this has been one of the biggest discussions around AI – which is, who is this going to help more, the defenders or the threat actors? When I was at RSA a couple of weeks ago, the consensus seemed to be that the ways we’re using this on the defense side are more sophisticated right now than what we’re seeing from threat actors, which is mostly basic uses for content and phishing lures, things like that.
Metin Kortak: I think that if a sophisticated threat actor is actually attempting to breach a network, they’re likely not using artificial intelligence. I think that they’re likely using more manual and sophisticated ways to breach networks. But on the defense side, absolutely, I think using artificial intelligence can be very beneficial. It can help us identify these vulnerabilities a lot faster and then remediate them. But if somebody is really looking to breach a network, they probably have a lot better options than relying on artificial intelligence models.
Lindsey O’Donnell Welch: How is AI being used in different capacities across industry verticals, whether that’s healthcare or banking? And as a follow-up question, given the compliance challenges that each of these industries deals with, how is that a factor in how AI is being used?
Metin Kortak: In the cybersecurity field, I have been seeing artificial intelligence used more in things like intrusion detection platforms to identify anomalies and suspicious activity. We already have intrusion detection systems in place, but they usually identify anomalies, suspicious activity and other security-related issues using a fixed algorithm.
With AI, because it is using learned behavior, it is able to identify these security incidents a lot better than a system simply following a fixed algorithm. So with things like intrusion detection systems and vulnerability monitoring platforms, there is definitely an added benefit to utilizing artificial intelligence. In addition to that, we have also been seeing artificial intelligence platforms answering security questionnaires or RFPs for customers, for example. For those really tedious processes that take a lot of time manually, using artificial intelligence has actually helped us complete that type of work much faster.
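As a rough illustration of that “learned behavior” idea, the sketch below fits an anomaly detector to a baseline of normal traffic instead of hard-coding threshold rules. It uses scikit-learn’s IsolationForest, and the synthetic per-session features (requests per minute, distinct ports contacted, failed logins) are purely illustrative stand-ins for the signals a real intrusion detection system would extract from logs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic baseline of "normal" sessions:
# [requests per minute, distinct ports contacted, failed logins]
rng = np.random.default_rng(42)
normal_sessions = rng.normal(loc=[20, 3, 1], scale=[5, 1, 1], size=(500, 3))

# Learn what normal looks like rather than writing fixed threshold rules.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# Score new sessions: predict() returns 1 for normal, -1 for anomalous.
new_sessions = np.array([
    [22, 3, 0],     # looks like typical traffic
    [400, 60, 25],  # scan-like burst with many failed logins
])
print(model.predict(new_sessions))  # expected: [ 1 -1 ]
```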
When it comes to other industries like healthcare and banking, artificial intelligence is never 100 percent accurate. It may give you a very solid answer one time and a really bad answer the next. So in an industry that impacts someone’s life, like healthcare, we don’t really see artificial intelligence being used that much, because it is still unpredictable and some of the answers it gives may not yield good results. I think it can still be used to aid doctors and the other systems they’re using for healthcare, but I do not see it being used for systems that might directly impact a person’s life.
“I think that if a sophisticated threat actor is actually attempting to breach a network, they’re likely not using artificial intelligence.”
Lindsey O’Donnell Welch: As a CISO, what are you seeing in terms of CISO interest in AI use cases, and in how AI fits into security programs within companies?
Metin Kortak: Yeah, so recently I’ve been seeing a lot of the third-party vendors we work with automatically enabling artificial intelligence learning models without really asking us. Especially if you’re using a SaaS product, there is a good chance that if you go to the settings page, there is an option to disable artificial intelligence or keep it enabled – and most of the time you will see that it has been enabled by default. That has been making our jobs a lot more difficult, because it’s essentially a new product being enabled without our consent, and that creates issues with third-party security assessments.
So because of that, we have actually been reviewing some of our customers’ products and other critical third-party vendors they work with, and either disabling the AI tools or conducting further assessments to ensure that enabling AI will not cause any compliance or other governance-related security concerns.
However, we have also been using artificial intelligence ourselves for things like answering RFPs, answering security questionnaires, analyzing logs, and analyzing security reports to gather information much faster. So I do think it has been very valuable to us. It has made our jobs a lot easier, but at the same time we have been doing much stricter due diligence because of how common AI has become in the platforms we use on a day-to-day basis.
Lindsey O’Donnell Welch: I think that brings up a good point, which is that a lot of companies I talk to are saying, “We want AI, but we want to make sure that it solves a business problem that we have. We don’t just want it slapped onto a product.” As a CISO, when you’re evaluating different AI offerings, what sticks out to you where you say, “This could be applicable and useful for an organization,” versus, “Okay, that seems like more hype”?
Metin Kortak: I really see AI as an efficiency improvement. If something takes a long time manually, it can likely be done faster using artificial intelligence, which is why we started using AI for analyzing security logs and identifying certain security incidents – doing manual log reviews or reviewing certain systems manually just takes up a lot of time. And at the end of the day, this saves organizations a lot of money and resources, because they can allocate those resources to solving better problems.
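For a sense of what that kind of log triage can look like, here is a minimal sketch that sends a log excerpt to a hosted model for summarization. The provider, model name, and prompt are illustrative assumptions, and in line with the privacy points above, logs should be scrubbed of sensitive data before being sent to any third-party model.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def triage_log_excerpt(log_text: str) -> str:
    """Ask a hosted model to summarize a log excerpt and flag anything suspicious."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You are a security analyst. Summarize the notable "
                           "events in this log excerpt and flag anything suspicious.",
            },
            {"role": "user", "content": log_text},
        ],
    )
    return response.choices[0].message.content

sample = "sshd[201]: Failed password for root from 203.0.113.7 port 52011 (50 attempts)"
print(triage_log_excerpt(sample))
```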
Lindsey O’Donnell Welch: Are there any trends related to AI and cybersecurity that you think are going to be big or something to keep our eyes on over the next year?
Metin Kortak: I would definitely keep your eyes open for any other cybersecurity regulations that are coming up. I think ISO 42001 is becoming a lot bigger – we have a lot of customers asking us about that framework, and we have already started working on it with some of our customers.
But on top of that, we are expecting some additional cybersecurity frameworks and regulations to be released soon, so those should definitely be important to watch for. We’re expecting that in the next couple of years, a lot of organizations are going to start requiring these frameworks if you’re utilizing an AI system. If you have not implemented these security controls, or if you haven’t followed the guidance from some of these cybersecurity frameworks, that means you might have a lot more work to do later down the line.
You can read the original article, posted on the Decipher Podcast, by Lindsey O’Donnell Welch and Metin Kortak.
About Rhymetec
Our mission is to make cutting-edge cybersecurity available to SaaS companies and startups. We’ve worked with hundreds of companies to provide practical security solutions tailored to their needs, enabling them to be secure and compliant while balancing security with budget. We enable our clients to outsource the complexity of security and focus on what really matters – their business. Contact us today to get started.