Is any regulation of AI in EU needed? It's coming...


Posted by Radosław Dzik, PhD, Global Channel Partner Manager | Head of Sales Europe


The regulation of AI use is still in its early stages, but there are several legal frameworks and initiatives aimed at addressing the ethical concerns associated with AI.

One example is the General Data Protection Regulation (GDPR) in the European Union, which regulates the collection, storage, and processing of personal data, including by AI systems. The GDPR requires that individuals be informed about the use of their personal data and gives them the right to access and correct that data, as well as the right to have their data erased in certain circumstances.
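The three GDPR rights mentioned above (access, rectification, erasure) can be pictured as operations on a data store. The sketch below is purely illustrative; all class and method names are hypothetical, and a real implementation would also need to handle consent, audit logging, and statutory response deadlines.

```python
# Toy illustration of the GDPR data-subject rights described above.
# All names here are hypothetical, not part of any real library.

class PersonalDataStore:
    """Minimal in-memory store keyed by data-subject ID."""

    def __init__(self):
        self._records: dict[str, dict] = {}

    def access(self, subject_id: str) -> dict:
        # Right of access (Art. 15): the subject may see their data.
        return dict(self._records.get(subject_id, {}))

    def rectify(self, subject_id: str, field: str, value) -> None:
        # Right to rectification (Art. 16): correct inaccurate data.
        self._records.setdefault(subject_id, {})[field] = value

    def erase(self, subject_id: str) -> bool:
        # Right to erasure (Art. 17): delete the subject's data entirely.
        return self._records.pop(subject_id, None) is not None


store = PersonalDataStore()
store.rectify("user-42", "email", "alice@example.org")
print(store.access("user-42"))  # {'email': 'alice@example.org'}
print(store.erase("user-42"))   # True
print(store.access("user-42"))  # {}
```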

Another example is the development of ethical guidelines and principles for AI. These guidelines are intended to inform the development and deployment of AI systems, and to ensure that they are designed and used in a responsible and ethical manner. Some examples of these guidelines include the Asilomar AI Principles and the AI Ethics Guidelines developed by the European Commission.

There are also initiatives aimed at promoting transparency and accountability in AI systems, such as the development of explainability standards and the creation of independent oversight bodies to monitor the use of AI.

In addition to these initiatives, there are also ongoing discussions and debates about the appropriate level of regulation for AI. Some argue that AI should be subject to strict regulation to protect against potential harms, while others argue that overly burdensome regulation could stifle innovation and development in the field. Ultimately, the regulation of AI use will likely involve a balance between these competing concerns.

Current works in EU

On 21 April 2021, the European Commission published its proposal for the EU AI Act (AIA). After months of internal negotiations, the EU Council adopted a common position in December 2022, and the draft legislation took another step towards becoming the world's first comprehensive attempt at AI legislation. In its current form, the bill takes a risk-based approach, dividing AI applications into three categories:

● prohibited,
● high-risk,
● low-risk.

The AIA is designed to introduce a common regulatory and legal framework for artificial intelligence and foster “trustworthy AI”. In doing so, it encompasses all sectors and all types of artificial intelligence. The ultimate aim is to ensure that AI systems are safe and respect existing laws on fundamental rights and EU values.

The first category relates to any subliminal, manipulative or exploitative systems that cause harm; real-time remote biometric identification systems used in public spaces for law enforcement; and any form of social scoring. All of these would be prohibited by the Act.

The second category includes applications such as systems that assess a consumer's creditworthiness, CV-scanning tools that rank job applicants, and any systems used in the administration of justice. These would be subject to the most extensive set of requirements.

The third category includes technologies such as AI chatbots, gaming, inventory management systems and most other forms of AI. These systems would be subject to far fewer requirements, primarily transparency obligations such as making users aware that they are interacting with a machine.
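The three-tier scheme described above can be pictured as a simple lookup from use case to risk category. The sketch below is illustrative only: the table entries are paraphrased from the examples in this post, and the default-to-low-risk behavior is an assumption, not a legal classification tool.

```python
# Toy mapping of AI use cases to the three draft-AIA categories
# described above. Illustrative only; not a legal classifier.

PROHIBITED = "prohibited"
HIGH_RISK = "high-risk"
LOW_RISK = "low-risk"

# Hypothetical lookup table based on the examples in the text.
RISK_TIERS = {
    "social scoring": PROHIBITED,
    "real-time remote biometric identification": PROHIBITED,
    "creditworthiness assessment": HIGH_RISK,
    "cv screening": HIGH_RISK,
    "administration of justice": HIGH_RISK,
    "chatbot": LOW_RISK,
    "inventory management": LOW_RISK,
}

def risk_tier(use_case: str) -> str:
    """Return the draft-AIA tier for a known use case; unknown uses
    default to low-risk ('most other forms of AI' in the draft)."""
    return RISK_TIERS.get(use_case.lower(), LOW_RISK)

print(risk_tier("Social scoring"))               # prohibited
print(risk_tier("Creditworthiness assessment"))  # high-risk
print(risk_tier("Chatbot"))                      # low-risk
```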

Technology or use regulation?

The aim of the regulation is laudable, comparable to the famous (some would say infamous) GDPR, which regulates privacy across the 27 EU member states. The only question is whether the legislation will keep up with the development of the technology.

In my humble opinion, the lack of precise definitions, the inadequate scope, and the questionable practicality of real-world application mean the Act is likely to become a source of problems and over-interpretation, and often an excuse for avoiding or excluding certain technologies. This could distort competition between businesses.

Regulation of some kind seems inevitable, but it has to be done thoughtfully. Many technologies can be used in very positive ways and can serve humanity (that is what we build them for), while at the same time some uses can be sensitive or even dangerous, overstepping individual freedoms and rights. We need to regulate the uses, not the enabling technology.


Ethical concerns raised by AI

The use of AI raises a number of ethical concerns, some of which are discussed below:

Bias: AI systems can reflect the biases of the people who develop and train them. This can result in unfair or discriminatory outcomes, especially in areas such as hiring, lending, and criminal justice.

Privacy: AI systems can collect and analyze vast amounts of data about people, raising concerns about privacy and the potential for misuse of personal information.

Accountability: AI systems can make decisions that have significant consequences, but it can be difficult to assign responsibility for these decisions when they are made by algorithms rather than humans.

Transparency: AI systems can be complex and opaque, making it difficult to understand how they make decisions and to identify and address any errors or biases in their decision-making.

Autonomy: As AI systems become more advanced, they may make decisions and take actions without human intervention, raising questions about the appropriate level of human oversight and control.

Job displacement: AI systems can automate tasks that were previously performed by humans, leading to job displacement and potential economic disruption.

Security: AI systems can be vulnerable to hacking and other forms of cyber attacks, potentially leading to significant damage or harm.

Do we need regulation at all?

Overall, the ethical concerns surrounding AI highlight the need for careful consideration of the potential risks and benefits of these technologies, as well as the development of appropriate policies and regulations to mitigate potential harm.

Again, we need to regulate the uses, not the enabling technology, unless the technology contains a bias built in by design.

Scylla AI ethics

Scylla is a protective intelligence suite that aims to prevent crime before it happens. It is a revolutionary security system developed based on artificial intelligence and machine learning. Its goal is to detect and prevent crime and violence in public areas using existing CCTV cameras.

At Scylla, we realize that with the power of AI comes the responsibility to use these capabilities wisely and ethically. That is why we adopted ethics by design, addressing potential ethical issues at the early development stage. We deliberately built ethnically and gender-balanced datasets in order to eliminate bias in face recognition. We do not store any data that can be considered personal. What's more, in jurisdictions where face recognition is not permitted for privacy reasons, Scylla's person search technology can be used to search for someone based on their general appearance.

Scylla is laser-focused on using advanced AI-powered video analytics to protect human life. We genuinely care who uses our software, and we do not sell it to anyone who would use it to oppress a particular group, breach human rights, or act in a way that is detrimental to society.

