An Interview With Metin Kortak, Rhymetec CISO, & Cynthia Corsetti On The Impact of AI.
“Have human oversight during AI-assisted development.
AI can and is helping software developers across the globe build software, identify code-related issues, and assist with overall development. Although the results can be very useful, AI is not flawless and there may be mistakes. Which is why it’s so important to review results from generative AI tools before using them in practice.”
Metin Kortak Of Rhymetec: How AI Is Disrupting Our Industry, and What We Can Do About It
Thank you so much for joining us in this interview series. Before we dive into our discussion our readers would love to “get to know you” a bit better. Can you share with us the backstory about what brought you to your specific career path?
Having a technical background in computer science, I came across an opportunity to work for a small cyber security company in New York City. This opportunity included working on cyber security projects for SaaS companies offering FinTech, CRM, and health care services. At first, this was meant to be a small contracting job, but my clients were incredibly happy with the services I provided — so much so, that they referred our company to several other businesses. In fact, the majority of our clients came through referrals from the network of our clients and partners in the beginning. These connections and the experience led me to Rhymetec, where I joined as the second employee and a partner to build my own department. Six years later, Rhymetec successfully provides cyber security and compliance services to hundreds of businesses across the globe.
What do you think makes your company stand out? Can you share a story?
Rhymetec is and has been 100% bootstrapped from the beginning. This has allowed the executive team to direct all of our focus on improving the quality of our services and our customer satisfaction. The reality is, taking an investment is a full time job and our energy is better spent on the company itself and not searching for funds.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
- Patience. In the early stages of our business, we only had three clients that we were working with, and this was the case for at least half a year. Growth comes slowly but when it does, it can be overwhelming. We had to scale for high growth while maintaining our patience when we only had a few clients.
- Hard Work. I remember needing to often work at our client offices until 1–2 a.m. in the morning due to issues related to their network firewalls or conducting penetration tests. There have been many all-nighters just because we didn’t have enough resources to complete some projects, but at the end of it we always did because we kept believing in what this company could become.
- Trust. As a cyber security company, our top priority is the security of our customers. Customers trust us and we trust our employees to protect and secure our customers. We put a lot of trust in our employees because we see them as part of this business and not as disposable assets. We are also very transparent with our employees and with that comes a lot of trust. I think it’s important to have mutual trust in this business because we value long term relationships with our employees, and we believe that’s what drives our business forward.
Let’s now move to the main point of our discussion about the impact of AI. Can you explain how AI is disrupting your industry? Is this disruption hurting or helping your bottom line?
Artificial intelligence (AI) has been one of the hottest topics to discuss since the emergence of ChatGPT, and it has made its way into the consumer space. My stance on Artificial Intelligence may be considered controversial. Artificial Intelligence has massive potential, from creating art to solving complex world issues. However, we see SaaS businesses jumping on the bandwagon and offering services utilizing AI without much due diligence or further thought into integrating AI with their product. I think the reason why we’re seeing the emergence of AI is due to the competitive space. It doesn’t matter how well it works, by offering AI services, you are trying to stay one step ahead of the game from your competitors.
The cold reality is, AI is growing faster than us. As a cyber security professional, I can’t underestimate the consequences on the security and privacy of customer data. Most of us do not understand how AI works because it is designed to be constantly evolving and learning from different data inputs.
Our approach to cyber security is compliance and regulation based. The work doesn’t stop there, but one of the first steps in building a strong information security program is to ensure compliance with applicable laws and regulations. There are currently no laws or regulations to protect the use of Artificial Intelligence, and that alone is a concern on its own.
Which specific AI technology has had the most significant impact on your industry?
Generative AI has had the most impact on not just our industry, but also the consumer space. It has revolutionized the way we approach creativity, problem-solving, and communication. Its ability to generate content, whether in the form of text, images, or even multimedia, has redefined possibilities and opened up new horizons for innovation. The dynamic nature of generative AI has not only streamlined processes within our industry but has also created more engaging and personalized experiences for consumers, leading to a paradigm shift in how we interact with technology. As we continue to witness advancements in this field, the influence of generative AI is bound to extend even further, shaping the future landscape of both my industry and consumer interactions.
Can you share a pivotal moment when you recognized the profound impact AI would have on your sector?
A couple of months ago, during a podcast I was a guest on, we discussed the potential impact of AI on the cyber security industry. It was brought up that some vulnerability scanning tools were considering utilizing AI to understand security vulnerabilities and conduct improved penetration tests. This can be a great tool to identify security vulnerabilities before an attacker does. However, if AI can be used for defensive security, it can most likely be used as an offensive if it was in the hands of the wrong people.
In a world where we rely on technology to harvest our crops, perform surgeries, secure our finances, and keep hospice patients alive, the impact of using AI as an offensive form of cyber attacks can have devastating effects.
How are you preparing your workforce for the integration of AI, and what skills do you believe will be most valuable in an AI-enhanced future?
At Rhymetec, we have banned the use of public consumer-facing generative AI tools such as ChatGPT, and conduct several cyber security awareness training to ensure our employees understand the importance of following our information security policies.
However, in an AI-enhanced future, valuable skills will include technical proficiency, adaptability, critical thinking, creativity, interpersonal skills, and ethical awareness. These attributes will enable individuals to collaborate effectively with AI, contribute innovative solutions, and navigate ethical considerations in a rapidly evolving technological landscape.
What are the biggest challenges in upskilling your workforce for an AI-centric future?
As a cyber security business, the security of our clients come at the forefront of our objectives. We are doing the following:
- Conducting a comprehensive vendor security analysis on behalf of our customers. Many SaaS products have already enabled AI services without notifying their customers. Just as I mentioned earlier, these are the companies that jumped the bandwagon and due diligence of these vendors is crucial to protecting our customers.
- Creating Artificial Intelligence security policies. Many businesses do not currently have internal policies around how to protect themselves from the risks of artificial intelligence. This is where we play our part and help our customers build applicable internal policies.
- In order to plan for the future, we are closely monitoring any laws, regulations, and compliance frameworks that focus on AI security. On December 8, 2023, the European Union agreed on the “A.I. Act.” According to the New York Times, this is one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.
What ethical considerations does AI introduce into your industry, and how are you tackling these concerns?
Data privacy is one of the top concerns that comes with the emergence of AI. AI uses algorithms to create responses and perform various activities. The algorithm learns from different data inputs, which in some cases include personally identifiable information (PII), financial data, and even electronic protected health information (EPHI). The lack of control over these data sets is problematic and may be already violating certain privacy laws such as GDPR and CCPA. It was already very difficult to provide full control over sensitive information to data subjects without Artificial Intelligence, and I can’t imagine it is easier or possible now.
What are your “Five Things You Need To Do, If AI Is Disrupting Your Industry”?
1. Isolate your AI systems
If you are using or building generative AI tools, maintaining control over your data is crucial and can be confusing. In order to tackle this issue, many organizations use self-hosted Large Language Models (LLMs). Large Language Models, also known as “LLMs” are deep learning data models pre-trained on vast amounts of data to recognize and generate content. Generative AI applications are built on top of large language models (LLMs) such as ChatGPT. When using generative AI systems, using self-hosted LLMs can protect the privacy of your customers’ data and give you more control over the data you receive as LLMs improve with more data.
2. Do not train public AI Tools with your data
Although this is new, some AI systems now give you the option to opt out of model training. This means your data will not be used for model training generative AI tools like ChatGPT. More importantly, organizations also have the option to opt out of model training if they are utilizing Open AI’s APIs for building their own generative AI tools.
3. Have human oversight during AI assisted development.
AI can and is helping software developers across the globe build software, identify code related issues, and assist with overall development. Although the results can be very useful, AI is not flawless and there may be mistakes. Which is why it’s so important to review results from generative AI tools before using them in practice.
4. Test your AI systems against OWASP Top 10 for LLM
As with all applications, testing for security vulnerabilities is crucial, and in some cases required by many compliance frameworks. The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs). The project has created a list of the top 10 security vulnerabilities when working with LLMs.
5. Create an AI security policy
Even if you do understand the security implications of AI, your employees may not. As generative AI tools become more consumer facing, employees often use tools like ChatGPT to assist them with their tasks. What happens if an employee inputs credentials or customer data? Samsung had to learn this lesson the hard way after an employee leaked sensitive code through ChatGPT. Having a strong AI policy written and implemented can ensure incidents like this don’t happen in the future and that your employees are trained on the security best practices of using AI tools.
What are the most common misconceptions about AI within your industry, and how do you address them?
A lot of people think that AI is biased and doesn’t yield satisfactory results. AI is not a person, it’s an algorithm generating answers and completing tasks based on data it’s provided with. If results are not satisfactory, that is likely due to issues with the data AI is provided to be trained with. AI systems learn from historical data, which may unintentionally embed existing biases present in that data.
To address this concern, it’s essential to educate the public about the nature of AI and its dependency on the quality and diversity of the training data. If the results appear biased or unsatisfactory, it is more likely a reflection of the biases present in the data used for training. In essence, addressing the misconception involves a combination of education, transparency, and a commitment to refining AI models to align with ethical and unbiased standards, reinforcing the understanding that AI’s performance is intricately tied to the quality and fairness of the data it learns from.
Can you please give us your favorite “Life Lesson Quote”? Do you have a story about how that was relevant in your life?
“Live like there is no tomorrow!” It may sound like a cliché, but it’s more relevant now than ever. My life is busy. I reside in New York City, and travel frequently for work. I have to always have an organized schedule to keep up with my lifestyle, and I have found it is crucial to prioritize being fully present. I realized the key to a happier life is to be more fully present. Everyday I try to adopt habits to be more present in life, like using less social media, taking a walk without my phone, or having small talk with co-workers. These intentional practices contribute to a more balanced and ultimately happier lifestyle.
Off-topic, but I’m curious. As someone steering the ship, what thoughts or concerns often keep you awake at night? How do those thoughts influence your daily decision-making process?
The responsibility of overseeing the technological landscape and security postures of organizations requires a heightened awareness of evolving threats and vulnerabilities. Each day, my efforts are dedicated to assisting an increasing number of organizations fortify their security postures, and the knowledge that another business is more secure as a result of my team’s efforts brings me greater peace of mind at night.
You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂
A movement advocating for enhanced laws and protections around consumer privacy would bring substantial benefits for consumers. We live in an era dominated by digital interactions, and individuals are increasingly vulnerable to the misuse of their personal information. Strengthening privacy regulations would establish clear boundaries, ensuring that companies handle consumer data responsibly and ethically.
How can our readers further follow you online?
You can follow my work on Linkedin.
Thank you for the time you spent sharing these fantastic insights. We wish you only continued success in your great work!