Blog | March 20, 2024

Harnessing the Power of Artificial Intelligence: A closer look at the European Union’s new landmark legislation

by Brendan Peter, VP, Global Government Affairs, SecurityScorecard

Artificial intelligence (AI) has become one of the most transformative forces of our time. From the mundane tasks of everyday life to the complexities of global industries, AI continues to permeate every aspect of society, reshaping how we live, work, and interact. The growing importance of AI is not just a trend but a fundamental shift in the way we perceive and leverage technology. This is particularly true in the cybersecurity industry. Already we’re seeing novel deepfakes and more convincing phishing templates emerging from large language models. And in the short term, many expect that attackers will have the upper hand over defenders when it comes to AI and cyber resilience.

The European Union takes action

The ethical implications of AI, the question of where and how the technology should be used, and the understandability of its outputs all demand careful consideration and regulatory oversight to ensure fairness, accountability, and transparency. Against this backdrop, lawmakers in the European Union last week approved a first-of-its-kind law that will govern how businesses and organizations in the EU use AI.

The landmark EU Artificial Intelligence Act categorizes AI systems into four levels of risk: minimal risk, limited risk, high risk, and unacceptable risk. Each category carries corresponding regulatory requirements aimed at ensuring adequate oversight.

Any AI system categorized as an “unacceptable risk” will be banned in the EU. Banned systems include:

  • Systems that manipulate people’s behavior;
  • Emotion-recognition systems used in workplace or education settings;
  • Biometric-categorization systems that infer characteristics like a person’s religious beliefs, political opinions, or sexual orientation.

The global impact of the AI Act

These regulations don’t apply only to businesses based in the EU. Organizations in the U.S. that develop or provide AI systems on a global scale will also need to comply with the provisions of this new legislation.

As with other major policy areas, the EU has set a new standard in technology governance. Though efforts have been made in the U.S. at the state level, along with various executive orders from the Biden-Harris administration, Congress has yet to reach a consensus on rules to govern the rollout of AI across society. While some are quick to label AI the “terminator,” as others on the SecurityScorecard team have put it, we see AI as more of a “teenager” right now: it makes mistakes, it’s immature, and it needs oversight.

When it comes to artificial intelligence, there’s a fine line between encouraging innovation and endangering national security. As such, it’s imperative to have guardrails in place not only to harness this technology but also to protect consumers and ensure a safer society. The fields of healthcare and financial services have clear regulations in place to safeguard all parties, and AI should be no different. Now that the EU has taken the first step, we expect more national legislation to follow. Getting this right is vital, and we look forward to engaging with policymakers across the globe to ensure the promise of AI is appropriately harnessed.
