How does the EU AI Act affect AI Investing?

February 26, 2024
Image of the EU AI Act produced by DALL·E

Summary

AI investing applications for alternative asset managers are generally considered low-risk under the Act, carrying only minimal transparency requirements.

Social scoring systems that gate access to financial resources are considered high-risk practices, but those implemented by asset managers, for example AI scoring of founder profiles, would be exempt if two conditions are met: the manager qualifies as a small-scale (SME) provider and the system is used purely internally.

The Act will require base model vendors such as OpenAI, Meta, or Google to guarantee that their models comply with EU regulations. It does not retroactively regulate existing models, only future versions.

The Act probably won’t take effect until 2025.

The AI Act is coming

On December 9, 2023, the EU Parliament reached a provisional agreement with the Council on the AI Act. The agreed text will now have to be formally adopted by both the Parliament and the Council to become EU law. The Act is a framework regulating AI applications in the EU, aiming to protect the rights of EU citizens from malicious uses and potential risks, much as the GDPR did for data collection systems. Based on historical patterns, the Act probably won’t take effect until 2025.

What institutional investors need to know when implementing AI systems

AI systems used in asset management, including those for private equity, venture capital, and private credit, are not considered high-risk applications in the final version of the AI Act.

Systems classified as limited-risk only need to ensure that users are informed when they are interacting with AI. For example, an AI agent engaging with founders during a Q&A session should disclose to them that they are interacting with AI. For internally used systems, it is usually enough that employees are aware they are working with an AI application.
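
As a minimal sketch of what that disclosure could look like in practice, the snippet below wraps a founder-facing Q&A agent so that every reply carries an explicit AI notice. The helper `llm_complete` is a hypothetical stand-in for whatever model API a firm actually uses, and the wording of the notice is illustrative rather than prescribed by the Act.

```python
# Minimal sketch: attach an AI disclosure to every founder-facing reply.
# `llm_complete` is a hypothetical placeholder for the firm's real model API.

AI_DISCLOSURE = (
    "Note: you are interacting with an AI assistant operated by the fund, "
    "not a human team member."
)

def llm_complete(prompt: str) -> str:
    # Placeholder for a real model call (e.g. a hosted LLM endpoint).
    return f"[model answer to: {prompt}]"

def answer_founder_question(question: str) -> str:
    """Answer a founder's question and always prepend the AI disclosure."""
    answer = llm_complete(question)
    return f"{AI_DISCLOSURE}\n\n{answer}"

if __name__ == "__main__":
    print(answer_founder_question("What is your typical check size?"))
```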

However, in cases where a large asset manager employs AI solutions that impact consumers' access to essential services such as housing, telecommunications, or credit, these applications could be considered high-risk. In such instances, the asset manager should thoroughly investigate the application to determine how to avoid a high-risk classification, if possible.

The social scoring question

The AI Act aims to regulate AI applications that employ social scoring for making decisions affecting consumers' access to basic necessities, potentially exacerbating discriminatory patterns. Social scoring in the context of the Act is defined as AI systems that evaluate or classify the trustworthiness of natural persons based on their social behavior in multiple contexts or known or predicted personal or personality characteristics. 

The use of social scoring to rank founders and managers based on their profiles is common practice among institutional investors in VC and PE. A number of studies show correlations between founders’ backgrounds, such as education and work experience, and the probability that their startups succeed. Other papers have shown that the founders’ network plays a relevant role too. The question that arises in such cases is whether AI systems can perpetuate past discriminatory patterns by using historical data to predict future returns.
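
To make that concern concrete, here is a deliberately simplified, hypothetical sketch of a founder-scoring model trained on historical outcomes. All column names and figures are invented for illustration; the point is only that such a model reproduces whatever patterns, including biased ones, are present in the historical data it is fitted on.

```python
# Hypothetical founder-profile scoring sketch; all data and features invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy historical deal data: background features plus whether the startup
# eventually met the fund's return target.
deals = pd.DataFrame({
    "top_tier_degree": [1, 1, 0, 0, 1, 0, 1, 0],
    "prior_exit":      [1, 0, 0, 1, 1, 0, 0, 0],
    "warm_intros":     [14, 3, 1, 9, 11, 2, 6, 1],
    "successful":      [1, 0, 0, 1, 1, 0, 1, 0],
})

X = deals.drop(columns="successful")
y = deals["successful"]

# The model learns whatever patterns the historical data contains; if past
# sourcing or selection was biased, the learned scores will reflect that bias.
model = LogisticRegression().fit(X, y)

new_founder = pd.DataFrame({"top_tier_degree": [0], "prior_exit": [0], "warm_intros": [2]})
print("Predicted success probability:", model.predict_proba(new_founder)[0, 1])
```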

In any case, to avoid slowing down innovation, the AI Act exempts small-scale private providers that use AI applications internally, even if they perform social scoring. As I understand it, asset managers who qualify as SMEs and use these applications for internal purposes are exempt from the EU AI Act’s regulatory requirements. It’s important to note that other obligations, such as those under GDPR, still apply when processing personal data.

My recommendations

When advising EU-based institutional investors who are exploring AI applications within their firms, I always recommend the following steps:

  1. Make sure to fulfill the minimum transparency requirements of the AI Act. If you are a large-scale asset manager investing in housing, infrastructure, or credit, take a closer look at the regulatory requirements.
  2. Verify GDPR compliance, especially if your operations extend across the EU. It's essential to understand the specifics of the data you're processing, its storage locations, and to confirm that your vendors have appropriate Data Processing Agreements (DPAs) in place.
  3. Engage your Environmental, Social, and Governance (ESG) team to monitor and address any discriminatory patterns that may arise from AI usage.
  4. Thoroughly document your AI applications and develop a Frequently Asked Questions (FAQ) document for both your limited partners (LPs) and for the regulatory bodies.

Guillem Sague, CEO @ CarriedAI
