The NY DFS's recent guidance on AI-related cyber risks is a significant development for financial businesses. It provides a robust framework for addressing emerging threats, particularly those related to AI. By integrating AI-related risks into their cybersecurity strategies and complying with 23 NYCRR Part 500, businesses can effectively mitigate these risks and enhance security.
On October 16, 2024, the New York State Department of Financial Services (DFS) issued guidance addressing AI-driven cybersecurity risks in the financial services industry. The guidance is a significant step in DFS's ongoing effort to enhance cybersecurity protections under 23 NYCRR Part 500.
While Artificial Intelligence (AI) offers advantages, it also creates new vulnerabilities that can be exploited. It is imperative that financial businesses regulated by DFS understand these risks and integrate AI considerations into their cybersecurity strategies. This understanding equips decision-makers to manage potential threats with confidence.
This blog will summarize the key components of DFS’s guidance and provide actionable insights for small and medium-sized businesses (SMBs) to strengthen their cybersecurity frameworks against increasing AI-related risks.
The Dual Role of AI: Risk and Solution
AI is rapidly transforming the cybersecurity landscape. On the one hand, it enhances threat detection, automates responses, and streamlines operations, giving cybersecurity professionals powerful tools to combat evolving threats. On the other hand, it also empowers cybercriminals by enabling more sophisticated phishing schemes, creating convincing deepfakes, and accelerating malware deployment.
Recognizing AI’s dual role, DFS’s 11-page guidance outlines the cybersecurity risks posed by AI and provides strategies to mitigate them. Notably, this guidance builds on the existing 23 NYCRR Part 500 framework without imposing new regulatory requirements.
Key Cybersecurity Risks Related to AI
AI-Enabled Social Engineering
AI has supercharged traditional social engineering attacks such as phishing, vishing, and deepfakes. These sophisticated, AI-driven attacks are particularly dangerous for financial businesses because the non-public information (NPI) they hold is highly valuable to attackers.
Threat actors now use AI to create highly realistic impersonations in video, audio, or text to deceive employees into divulging sensitive information or transferring funds. These impersonations, known as 'deepfakes,' can mimic the appearance or voice of trusted individuals, potentially bypassing biometric authentication systems and giving attackers unauthorized access to critical systems.
The DFS guidance underscores the alarming rise of AI-enabled social engineering and emphasizes that financial businesses must remain vigilant; for decision-makers in financial services, addressing this threat is urgent.
AI-Enhanced Cyberattacks
The DFS also points to the acceleration and amplification of AI-powered cyberattacks. Threat actors can use AI to scan vast amounts of data, identify vulnerabilities, and deploy ransomware or malware more effectively. AI’s ability to generate new malware variants and bypass traditional defenses, such as antivirus software, poses a growing threat to financial businesses. Furthermore, AI tools lower the technical barriers for malicious actors, allowing even those with limited skills to launch sophisticated attacks.
In response to this evolving threat landscape, DFS emphasizes the importance of continuous learning and adaptation in cybersecurity. Financial businesses must continually update their cybersecurity programs to reflect the increased potency of AI-enabled attacks. This proactive approach puts decision-makers in control of their company's cybersecurity posture: actively preparing for threats rather than merely reacting to them.
Data Exposure and Theft of NPI
AI-driven systems typically require large-scale data collection, often including sensitive NPI. This increases the attack surface for financial businesses, making them more attractive targets for threat actors seeking financial gain through data breaches. Additionally, AI systems often store and utilize biometric data, which introduces further risk. If compromised, biometric data such as fingerprints or facial recognition templates can be used to create convincing deepfakes, enabling unauthorized access to critical systems.
The DFS guidance states that businesses must adopt more robust data governance practices to mitigate this risk. These include minimizing data collection, disposing of unnecessary NPI, and securing biometric data against theft.
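To make these practices concrete, below is a minimal sketch of automated NPI disposal, assuming each record carries an explicit retention deadline set at collection time. The table and column names (npi_records, retain_until) are hypothetical, not terms from the guidance; in practice this logic would run as a scheduled job against the production data store, with deletions logged for audit.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical data store; records are tagged with a retention deadline.
conn = sqlite3.connect("records.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS npi_records (
           id INTEGER PRIMARY KEY,
           payload TEXT,
           retain_until TEXT  -- ISO-8601 UTC timestamp set at collection time
       )"""
)

# Dispose of any record whose retention deadline has passed.
now = datetime.now(timezone.utc).isoformat()
deleted = conn.execute(
    "DELETE FROM npi_records WHERE retain_until < ?", (now,)
).rowcount
conn.commit()
print(f"Disposed of {deleted} expired NPI record(s)")
```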
Third-Party and Supply Chain Dependencies
AI introduces complexities into the vendor ecosystem. Financial businesses often rely on third-party service providers (TPSPs) for AI-powered tools, such as fraud detection systems or customer service chatbots, which increases their exposure to supply chain vulnerabilities. If an AI-powered tool or vendor is compromised, it can serve as a gateway for attackers to infiltrate the institution’s systems, leading to significant NPI breaches.
The DFS guidance calls for enhanced due diligence in vendor management. Financial businesses should ensure that their TPSPs follow robust cybersecurity practices for AI-enabled products and services, such as regular security audits, encryption of sensitive data, and employee training on cybersecurity best practices. Contractual obligations for data protection, timely incident notifications, and periodic audits are essential to mitigating third-party risks.
DFS’s Recommended Controls to Mitigate AI-Related Threats
Risk Assessments and Cybersecurity Programs
The cornerstone of the DFS guidance is the need for financial businesses to integrate AI-specific risks into their cybersecurity risk assessments. These assessments should account for the organization’s use of AI, potential vulnerabilities, and threats from third-party AI applications. Risk assessments must be updated annually or whenever significant changes occur, ensuring that AI-related risks are consistently addressed.
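As an illustration of what accounting for AI in a risk assessment might look like, the sketch below tracks AI-specific risks in a machine-readable register so that entries outside the annual review cycle surface automatically. The dataclass fields are illustrative assumptions, not terms drawn from the DFS guidance.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskItem:
    threat: str                  # e.g., deepfake-enabled social engineering
    affected_systems: list[str]
    third_party: bool            # True if the risk stems from a TPSP's AI tool
    last_reviewed: date
    mitigations: list[str] = field(default_factory=list)

register = [
    AIRiskItem(
        threat="AI-generated voice phishing of the help desk",
        affected_systems=["help_desk", "password_reset"],
        third_party=False,
        last_reviewed=date(2024, 11, 1),
        mitigations=["callback verification", "MFA on password resets"],
    ),
]

# Surface entries that have fallen outside the annual review cycle.
stale = [item for item in register
         if (date.today() - item.last_reviewed).days > 365]
```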
Compliance with 23 NYCRR Part 500 remains paramount, and businesses should regularly evaluate and adapt their cybersecurity policies in response to emerging AI risks.
Third-Party and Vendor Management
Due diligence is critical for AI-powered tools used by third-party vendors, such as fraud detection systems or customer service chatbots. Financial businesses must ensure that TPSPs incorporate robust AI-related security measures. DFS recommends embedding contractual obligations for data protection and incident reporting, especially concerning AI-powered threats.
Businesses must also require their vendors to notify them promptly of any cybersecurity event that could impact their data or systems, particularly events stemming from the vendor’s use of AI.
Access Controls
Effective access control measures, such as multi-factor authentication (MFA), are essential defenses against AI-enhanced impersonation and social engineering attacks. DFS stresses that businesses should move beyond traditional biometric authentication and adopt advanced technologies like liveness detection, which differentiates real individuals from AI-generated deepfakes by verifying that a live person is present during authentication.
Businesses should also implement role-based access controls, ensuring users only have access to the NPI necessary for their roles. Access privileges should be regularly audited to ensure continued compliance.
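A deny-by-default policy of this kind is simple to express. The sketch below assumes an in-memory mapping of roles to permitted resources, with hypothetical role and resource names; production systems would typically delegate this to an identity provider or policy engine.

```python
# Hypothetical roles and NPI resources; anything not explicitly granted is denied.
ROLE_PERMISSIONS = {
    "loan_officer": {"customer_profile", "credit_report"},
    "support_agent": {"customer_profile"},
    "auditor": {"access_logs"},
}

def can_access(role: str, resource: str) -> bool:
    """Return True only if the role is explicitly granted the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

assert can_access("loan_officer", "credit_report")
assert not can_access("support_agent", "credit_report")  # denied by default
```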
Cybersecurity Awareness Training
DFS emphasizes the importance of mandatory AI-focused cybersecurity training for all personnel, including senior leadership. As AI continues to advance, so do cybercriminals’ tactics. Training programs must evolve to include simulations of AI-driven social engineering attacks, preparing staff to recognize and respond to these sophisticated threats.
Moreover, the guidance highlights the importance of cultivating a culture of cybersecurity awareness. Senior leadership must actively promote and participate in cybersecurity efforts to instill best practices throughout the organization. When leadership treats security as a shared priority, staff remain engaged and committed, knowing their vigilance is a crucial line of defense against cyber threats.
How AI Improves Cybersecurity Defenses
Despite its risks, AI can be a powerful tool for enhancing cybersecurity defenses. It can be leveraged to improve threat detection, automate routine tasks, and analyze behavior for anomaly detection, helping financial businesses proactively identify vulnerabilities and strengthen incident response capabilities.
AI can also help streamline complex cybersecurity processes, such as log analysis and security alert reviews, allowing human teams to focus on high-priority tasks.
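As a small illustration of this kind of triage, the sketch below uses scikit-learn's IsolationForest to flag unusual login events for human review. It assumes events have already been reduced to numeric features; the feature choices and contamination rate are illustrative assumptions, not recommendations from the guidance.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_logins_last_hour, mb_transferred]
events = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15],
    [14, 0, 10], [15, 1, 9],
    [3, 12, 840],  # off-hours burst of failed logins plus a large transfer
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
flags = model.predict(events)  # -1 marks likely anomalies

for event, flag in zip(events, flags):
    if flag == -1:
        print("Flag for human review:", event)
```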
Furthermore, AI’s predictive capabilities enable businesses to forecast potential threats, helping them stay ahead of evolving cyber risks.
Future Considerations for AI and Cybersecurity
As AI technologies evolve, so will the associated cybersecurity risks. Financial businesses must regularly review and update their cybersecurity programs to address new AI-related threats. Continuous collaboration between compliance experts and corporate cybersecurity teams will be essential to maintaining a solid regulatory and security posture in an AI-driven world.
Moreover, SMBs must recognize AI's growing role in both generating and mitigating threats.
As AI becomes more sophisticated, financial businesses must adopt equally sophisticated defenses and stay vigilant against cybercriminals' evolving tactics. Staying ahead of AI-driven threats is crucial to the financial sector's long-term resilience.