EU's AI Act and Its Consequences for Startups: Is Innovation at Risk?

Updated: Dec 10, 2024

Written by Alya Sezen (BSc Politics)


In an era where Artificial Intelligence (AI) is reshaping industries, the European Union's AI Act - a regulatory framework for AI development and use within its borders - is sparking both concern and intrigue among the public. The EU's AI Act seeks to protect public interests, such as data privacy and security, while establishing trustworthy, transparent and ethical AI practices. Nevertheless, concerns have resurfaced among European startups and small- to medium-sized enterprises (SMEs) that the Act's financial and regulatory burdens will stifle innovation. The question therefore remains: are these apprehensions justified, or could the AI Act prove a fundamental measure for establishing ethical standards?



EU’s AI Act

The AI Act, introduced in April 2021, classifies AI applications according to their risk level: minimal, limited, high, and unacceptable (European Commission, 2021a). Applications considered high-risk, such as those in healthcare, infrastructure and education, are subject to specific and extensive requirements. More precisely, the Act's strict requirements include transparency obligations, data governance standards and the establishment of risk management protocols. For high-risk AI systems, the Act also contains provisions on human oversight, auditing and risk assessments (European Commission, 2021b). Non-compliance penalties are equally strict: companies that violate the Act face fines of up to €30 million or 6% of their global revenue, whichever is higher (European Commission, 2021).


The EU AI Act and Its Consequences on Startups and SMEs

Startups are estimated to drive 75% of AI innovation. However, the enactment of the EU AI Act and its extensive regulations creates various challenges for these companies, particularly around compliance. According to the European Startup Network, 60% of startups report being unprepared for the Act's requirements, and 45% express financial concerns regarding it (European Startup Network, 2021). In addition, the Centre for Data Innovation (2021) estimates that compliance for AI systems could cost startups an average of €400,000 annually. For instance, Aidence, an Amsterdam-based healthcare AI startup, could potentially be forced to spend €500,000 per year on compliance (DW, 2023).


Nonetheless, it is important to highlight that the EU AI Act's main objectives centre on establishing public trust, particularly in fields such as healthcare and finance.

Although the AI Act enforces strict requirements, one should consider the rationale behind them. The EU aims to make AI a trustworthy tool by preventing technological misconduct that could lead to serious harm. For instance, research from 2022 revealed that over 30% of AI applications in critical sectors like finance and healthcare exhibited bias in decision-making (IDC, 2021). Through the enactment of the AI Act, the EU seeks to reduce risks such as these.


Furthermore, in order to assist startups, the European Commission has included provisions for regulatory sandboxes within the AI Act. These sandboxes enable startups to experiment with their innovations without facing immediate regulatory constraints, allowing a gradual adjustment to compliance standards. Moreover, the Commission has allocated €1 billion annually for digital innovation funding to help startups meet these standards while advancing their research initiatives (European Commission, 2021c). A successful example of this fund's use is Alice & Bob, a Paris-based startup that drew on EU funding to strengthen both its compliance efforts and its innovation, proving that the two can coexist with the right resources and strategy.


Conclusion

The EU's AI Act poses risks and potentially drastic consequences for startups, particularly those developing high-risk AI applications. Specifically, the financial burdens of regulatory compliance and the need for extensive auditing could slow innovation. Nevertheless, by aligning AI development with ethical principles and offering support through funding, the EU provides a framework that could ultimately promote sustainable growth and trustworthy AI practices.

LSESU Think Tank | Best New Society 2024
