You’ve just had your morning coffee, your Monday is off to a tiring start, and you log into a 9 a.m. Zoom interview to vet a potential software engineering candidate. The candidate looks and sounds a little off, but they seem to know their stuff, and it is a 9 a.m. interview, after all.
You make the hire. On the new employee’s first day, your security operations center catches them attempting to download intellectual property from a SharePoint site they have no reason to access. Upon investigation, it turns out the “employee” is actually an agent of a hostile foreign government attempting to steal intellectual property.
This may sound like science fiction, but something eerily similar just happened to the security awareness training company KnowBe4. Change “hostile foreign government” to “North Korea” and “attempted to download SharePoint files” to “tried to spread malware through the network,” and you’ve just described the highly sophisticated attack carried out by a North Korean advanced persistent threat group.
The advent of generative AI has opened up a dramatic range of new attack options for malicious actors. Spoofed images, identities, and videos, and even live calls with perfectly matching voices, are now technically feasible for sophisticated actors and will likely soon be within reach of even remarkably unsophisticated ones.
We are very close to losing the ability to trust what we see and what we hear online. For businesses, the time to implement measures that mitigate generative AI security risks is now.
Security Controls Have To Keep Up: Think Like The Enemy
We must accept a new world in which anything digital can be effortlessly faked and ongoing geopolitical turmoil spills over into cyberespionage. The proliferation of generative AI security risks requires a fundamental rethinking of the role of cybersecurity in an organization.
So, what is needed to meet this new reality with confidence?
Security Must Be A Function Of Corporate Governance
Security can no longer be an afterthought or a sub-department of information technology.
Instead, it needs to be a board-level topic and a C-level function. Fortunately, many organizations and institutions are rapidly coming to this realization. For example, the NIST CSF V2.0 factors this in with its new Govern function.
The NIST Govern function calls on organizations to:
- Ensure executives take an active role in organizational risk management and collaborate on how potential cybersecurity gaps across the organization may impact broader objectives.
- Hold regular dialogue among executives about risk management strategies, roles, and policies.
- Establish security goals at the executive level, tailored to the industry and organization.
- Document and communicate security policies and expectations down the line to managers and individuals.
- Encourage collaboration at the executive level and across the organization on risk management strategies, including cybersecurity supply chain risk.
- Roll cybersecurity risk management explicitly into overall Enterprise Risk Management (ERM).
The bottom line is clear: NIST recommends organizations build a culture of security, starting at the top.
Employee Vetting Is Increasingly Important To Combat Generative AI Security Risks
The rise of remote work since COVID has been enormously beneficial to many companies, individuals, and families, but it brings its own set of information security challenges.
Remote work is fantastic (Rhymetec is a 100% remote company!), but if you handle sensitive data or have sensitive intellectual property, it’s likely you need to do more extensive vetting of workers you haven’t met before. This includes taking proactive measures such as:
- Require all new hires or contractors with access to data to undergo extensive background checks (SOC 2 lists “evaluating individual backgrounds” as a requirement, typically met with a background check).
- Consider requiring an in-person interview component for senior hires or those who will have access to particularly sensitive data.
- Conduct open-source research on potential hires. Do they have a digital footprint? Do their social media accounts check out? Is their information consistent across employment documentation and other sources?
- Conduct detailed reference checks with former employers, including live phone conversations.
- Consider hiring only in locales where you can readily verify employment history and background (for example, Rhymetec only hires U.S.-based staff).
While these measures aren’t foolproof (unfortunately, nothing is 100%), they dramatically reduce your organization’s risk of hiring a potential bad actor or foreign agent.
Train Users On Generative AI Security Risks and Threats
It is now trivially easy to clone an individual’s voice, create a highly personalized phishing email, or digitally alter images. However, many employees may not be aware that extra caution is needed when meeting outside parties virtually for the first time or taking phone calls that purport to come from colleagues.
Consider conducting specific training for your organization on how generative AI can be used for advanced impersonation techniques. This is a critical measure you can take to secure your remote workforce.
Practice The Principle of Least Privilege
It’s likely that in the relatively near future, individuals and companies will be regularly targeted by generative AI-enabled attacks.
From spear-phishing emails to voice cloning, generative AI is a natural tool not only for nation-states but also for ordinary cybercriminals seeking to carry out more effective attacks and thefts. Realistically, some percentage of organizations (even those practicing good security!) will be compromised as a result.
The principle of least privilege, combined with data segmentation, is a critical facet of an effective defense. Data breaches can never be 100% prevented, but you can minimize the risk and mitigate the damage. NIST defines the principle of least privilege as:
“A security principle that a system should restrict the access privileges of users (or processes acting on behalf of users) to the minimum necessary to accomplish assigned tasks.”
Implementing the principle of least privilege dramatically reduces the risk of a major breach or other successful cyberattack against your organization due to generative AI security risks and other threats.
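To make this concrete, here is a minimal sketch of default-deny, role-based access control in Python. The role names, permission strings, and `require_permission` helper are hypothetical examples for illustration only; in practice, you would typically lean on your identity provider’s or cloud platform’s built-in RBAC rather than rolling your own.

```python
# Minimal least-privilege sketch (illustrative only; the roles and
# permission names below are hypothetical examples).
from functools import wraps

# Each role is granted only the permissions it needs, nothing more.
ROLE_PERMISSIONS = {
    "support_agent": {"tickets:read", "tickets:write"},
    "engineer": {"tickets:read", "repos:read", "repos:write"},
    "auditor": {"tickets:read", "logs:read"},  # read-only by design
}

def require_permission(permission):
    """Deny by default: allow the call only if the user's role grants it."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            granted = ROLE_PERMISSIONS.get(user["role"], set())
            if permission not in granted:
                raise PermissionError(
                    f"{user['name']} ({user['role']}) lacks '{permission}'"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("logs:read")
def read_audit_logs(user):
    return f"audit logs returned to {user['name']}"

if __name__ == "__main__":
    print(read_audit_logs({"name": "dana", "role": "auditor"}))  # allowed
    try:
        read_audit_logs({"name": "sam", "role": "support_agent"})
    except PermissionError as exc:
        print(f"Denied: {exc}")  # support agents never see audit logs
```

The key design choice is the default-deny posture: a role with no explicit grant resolves to an empty permission set, so nothing is accessible until someone deliberately grants access.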
Follow A Framework or Standard
We’ve talked a lot about reducing the risk of generative AI threats to your business, and that is important. But in reality, many organizations today aren’t doing the bare minimum to protect accounts. GenAI deepfake voice phishing attacks are scary, but so are threat actors leveraging leaked credentials to simply log in to vulnerable accounts.
Rhymetec has extensive experience helping organizations meet a diverse range of cybersecurity standards and frameworks designed to reduce the risk of major cyber events as much as possible while minimizing disruption to core business practices.
Generative AI security risks include AI-enabled phishing attacks, deepfakes, voice spoofing, data poisoning, and automated cyberattacks. You can vastly reduce the risk of these threats by implementing an information security standard with gold-standard measures like employee training, patch management, data encryption, multi-factor authentication, and thorough vendor and employee vetting.
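As a small illustration of the credential-hygiene piece, the sketch below checks a candidate password against the public Have I Been Pwned “Pwned Passwords” range API. The API is built around k-anonymity, so only the first five characters of the password’s SHA-1 hash ever leave your machine. Treat this as a hedged example of one control (it assumes outbound network access), not a substitute for MFA or a full password policy.

```python
# Sketch: reject passwords that appear in known breach corpora.
# Uses the Have I Been Pwned "Pwned Passwords" range API; only the
# first 5 characters of the SHA-1 hash are ever sent (k-anonymity).
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Each response line looks like "HASH_SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("correct horse battery staple")
    if hits:
        print(f"Rejected: seen {hits} times in breach data.")
    else:
        print("Not found in known breaches; still enforce MFA and length rules.")
```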
We recommend choosing an information security standard, requirement, or framework to meet, such as:
- SOC 2
- NIST 800-53
- ISO 27001
- NIST CSF V2.0
Frameworks and standards can help guide your organization in building an effective cybersecurity posture that reduces the risk of data breaches, ransomware attacks, insider threats, and, yes, deepfakes.
Approach Vendor Selection Strategically
It’s just as important to vet your potential vendors as it is to vet potential employees.
Many businesses nowadays opt to outsource their cybersecurity by working with a Managed Security Services Provider (MSSP) instead of hiring an in-house security team. Additionally, outsourcing complex services like penetration testing is the norm.
The following questions can help guide your decision when evaluating potential vendors:
1. Where is the vendor’s team based?
Location can influence a vendor’s familiarity with regional and industry-specific regulations.
2. Who exactly will be carrying out the work for you?
Are they permanent staff or contractors? Permanent staff may provide greater consistency and accountability compared to contractors. Be aware that some MSSPs do outsource a large portion of their services, so it’s critical to ask this question.
3. What are the team’s credentials and prior experience?
Ask for specific information, examples, and case studies that showcase their past projects and qualifications.
4. How much time will be allocated to this project?
Asking about the time commitment, particularly for services like penetration testing, helps form a complete picture of how comprehensive the project will be.
5. How will the final report work, if applicable?
For services like penetration testing, internal audits, and phishing testing & training, ask how the findings will be presented to you. This could give you a better understanding of where potentially sensitive information would be stored and who may have access to it.
Do they have a dashboard with an easy-to-navigate UI for you? Will you be provided with a detailed report conveying the findings and corresponding recommendations? Will you be given regular updates throughout the project or only a final report? Regular updates can be important for offerings like phishing testing services.
6. Do they outsource any services internationally?
If so, to where? A vendor’s control and oversight of the process may be impacted if they are outsourcing a large portion of the work, and different regions may have different regulations and standards. How does the vendor ensure quality and security despite outsourcing?
These are important questions to ask, especially when thinking about generative AI security risks.
In Conclusion: The Importance of Employee and Vendor Vetting In the Age of Generative AI Security Risks
Thorough vetting of potential vendors and employees is essential as a proactive security measure, particularly given the rise of generative AI-enabled threats.
No security program can ever be 100% foolproof, but fortunately, there are many measures you can take to substantially reduce risk. Hopefully, this guide provided actionable recommendations and strategic questions to ask as you are considering vendors and vetting employees.
Rhymetec has operated as a fully remote company for nearly a decade, so our team has ample experience implementing these strategies and can provide further guidance. We act as part of your team to provide fully managed services in areas including risk management, employee vetting and training, continuous monitoring, compliance automation, cloud storage security, and more.
About Rhymetec
Our mission is to make cutting-edge cybersecurity available to SaaS companies and startups. We’ve worked with hundreds of companies to provide practical security solutions tailored to their needs, enabling them to be secure and compliant while balancing security with budget. We enable our clients to outsource the complexity of security and focus on what really matters – their business. Contact us today to get started.
About the Author: Justin Rende, CEO
Justin Rende has been providing comprehensive and customizable technology solutions around the globe since 2001. In 2015 he founded Rhymetec with the mission to reduce the complexities of cloud security and make cutting-edge cybersecurity services available to SaaS-based startups. Under Justin’s leadership, Rhymetec has redesigned infosec and data privacy compliance programs for the modern SaaS-based company and established itself as a leader in cloud security services.
Interested in reading more? Check out more content on our blog.