
SFC Regulation of Generative AI Language Models in Hong Kong Securities Markets

In November 2024, the SFC issued guidance to licensed corporations setting out regulatory standards for the use of generative artificial intelligence (AI) models in the provision of investment services. In this article, we provide a brief overview of the guidance. If you’d like more information about the regulations governing the use of generative AI within an SFC licensed corporation, please contact one of our financial services regulatory lawyers.

"Leading Practice"

"Exceptionally Talented"

"The Choice for Sophisticated Clients"

"Leading Lawyer"

"Leading Practice"

"Global Leader"

December 6, 2024
By Timothy Loh
 

On November 12, 2024, the Securities and Futures Commission ("SFC") issued a circular to licensed corporations ("LCs") regarding the responsible use of generative AI language models ("AI LMs") in their businesses. The adoption of AI LMs is increasing, offering efficiencies in tasks such as client interaction, research, and software development. However, the circular highlights several risks associated with their use and sets out the SFC's expectations for how LCs should mitigate them. These expectations focus on governance, model validation, cybersecurity, third-party risk management, and the protection of client interests.

Benefits and Risks of AI LMs

AI LMs can improve operational efficiency but also present new risks, including:

  • Inaccurate or biased outputs (e.g., hallucinations, biases in training data);
  • Cybersecurity vulnerabilities (e.g., data breaches, attacks); and
  • Over-reliance on AI outputs without human oversight.

Risk Management Framework

LCs are expected to:

  • Implement effective governance: Senior management should oversee the use of AI LMs, put in place appropriate policies and procedures, and ensure that relevant staff have adequate expertise in AI and risk management;
  • Conduct model validation: AI LMs should undergo thorough testing for performance, cybersecurity risks, and output accuracy, particularly in high-risk applications like investment advice; and
  • Monitor AI LM performance: Ongoing testing and oversight are required to ensure AI LMs continue to function as intended, especially as external conditions change.

High-Risk Use Cases

Using AI LMs for investment-related tasks (e.g., generating recommendations or advice) is considered high-risk. LCs must take additional precautions:

  • Human oversight to review AI-generated outputs;
  • Accuracy testing to ensure AI outputs meet required standards; and
  • Disclosures to clients about the involvement of AI and potential inaccuracies.

Cybersecurity and Data Management

LCs must:

  • Have robust controls to protect against cybersecurity threats, including adversarial attacks; and
  • Manage AI training data to prevent risks such as data leakage or violations of personal data privacy laws.

Third-Party Risks

If an LC relies on external providers for AI LMs, it must perform due diligence to ensure these providers meet regulatory and operational standards. The LC remains responsible for managing risks even if the AI LM is developed externally.

Notification Requirements

The use of AI LMs in high-risk use cases is regarded as a significant change in an LC's business that must be notified to the SFC. LCs should make the notification as early as possible, preferably during the development phase, so that any regulatory issues can be identified and addressed early.

Implementation

The circular takes immediate effect, and LCs are encouraged to review their internal controls and policies to comply with these expectations. The SFC will assess compliance pragmatically, recognizing that some firms may need time to adapt.

Conclusions

Whilst the SFC supports the use of generative AI LMs to enhance the provision of financial services by LCs, it requires LCs to take a careful, risk-based approach to ensure that these tools are used responsibly.
