The European Union’s ground-breaking Artificial Intelligence Act (“AI Act”) is entering the final phase of its legislative journey, with the European Parliament giving its approval last month. For organizations that develop or use AI, understanding this new framework has never been more urgent—particularly since certain provisions, including those on prohibited AI practices, will begin applying earlier than many other aspects of the Act.
Below, we explore why these new rules matter and when they will apply, then turn to the eight key practices the AI Act bans outright.
Implications of the EU AI Act
- Risk-Based Approach:
The AI Act adopts a risk-based system, dividing AI systems into tiers: prohibited (unacceptable risk), high-risk, limited risk (subject to transparency obligations), and minimal risk. High-risk systems face stringent obligations (e.g., conformity assessments), while prohibited practices are simply barred from deployment in the EU.
- Penalties for Non-Compliance:
Organizations violating the prohibited practices risk large administrative fines—up to EUR 35 million or up to 7% of their global annual turnover, whichever is higher. EU institutions also face fines of up to EUR 1.5 million for non-compliance.
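As a rough illustration (not legal advice), the "whichever is higher" rule for the fine ceiling can be sketched in a few lines of Python; the function name and inputs are illustrative assumptions:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for violating the
    prohibited-practice rules: EUR 35 million or 7% of global annual
    turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the higher figure is the cap.
large_company_cap = max_fine_eur(1_000_000_000)

# For a company with EUR 100 million in turnover, 7% is only EUR 7
# million, so the EUR 35 million floor applies instead.
small_company_cap = max_fine_eur(100_000_000)
```

In practice, the actual fine imposed depends on the circumstances of the infringement; this only captures the statutory ceiling.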
- Operator-Agnostic Restrictions:
The bans on certain AI uses apply to anyone involved in creating, deploying, or distributing AI systems—regardless of their role or identity. This approach ensures a broad application of the prohibitions and underscores the Act’s emphasis on safeguarding fundamental rights.
- Relationship to AI Models:
Prohibitions target AI systems rather than the underlying models. However, once a model—be it general-purpose or domain-specific—is used to create an AI system engaging in any prohibited practice, the ban applies. This distinction between “AI model” and “AI system” is crucial to avoid confusion around who bears responsibility when an AI solution transitions from research to a market-ready product.
- Future-Proofing AI Governance:
By instituting outright bans on certain uses and setting stringent standards for high-risk systems, the Act aims to mitigate risks and uphold core European values (e.g., dignity, freedom, equality, privacy). As AI evolves, the AI Act’s approach seeks to adapt and protect individuals from unethical or harmful applications.
Key Timelines: Gradual Application of the Act
The EU AI Act applies in stages, with the ban on prohibited AI practices taking effect well before most other obligations. The key dates are:

- 1 August 2024: The AI Act enters into force.
- 2 February 2025: Prohibitions on unacceptable-risk AI practices (Article 5) and AI literacy obligations begin to apply.
- 2 August 2025: Rules for general-purpose AI models begin to apply.
- 2 August 2026: Most remaining obligations, including those for many high-risk systems, apply.
- 2 August 2027: Extended transition ends for high-risk AI embedded in products covered by EU product-safety legislation.
Prohibited AI Practices Under the EU AI Act
While the AI Act sets rules for high-risk systems (imposing specific technical and operational requirements), it completely bans AI systems that pose an unacceptable level of risk to fundamental rights and EU values. These prohibitions are laid out in Article 5 and target AI uses that could:
- Seriously undermine personal autonomy and freedom of choice,
- Exploit or discriminate against vulnerable groups,
- Infringe on privacy, equality, or human dignity, or
- Enable intrusive surveillance with limited accountability.
Below are the eight key AI practices that the EU AI Act explicitly forbids:

1. Subliminal, Manipulative, and Deceptive AI Techniques Leading to Significant Harm
What is Banned?
Any AI system using covert, manipulative tactics (e.g., subliminal cues, deceptive imagery) that distort individuals’ behavior or impair their decision-making, potentially causing severe physical, psychological, or financial harm.
Why it Matters
These practices strip individuals of free, informed choice. Examples might include streaming services that embed unnoticed prompts in content to alter viewer behavior, or social media platforms strategically pushing emotionally charged material to maximize engagement.
Important Nuance
AI in advertising is not outright banned; rather, advertising activities must avoid manipulative or deceptive methods. Determining where advertising crosses the line demands careful, context-specific analysis.
2. AI Systems Exploiting Human Vulnerabilities and Causing Significant Harm
What is Banned?
Any AI system that targets vulnerable populations—for instance, children, people with disabilities, or individuals facing acute social/economic hardship—and substantially distorts their behavior in harmful ways.
Why it Matters
By exploiting intimate knowledge of vulnerabilities, such systems can erode user autonomy and lead to discriminatory outcomes. Advanced data analytics might, for example, push predatory financial products to individuals already in severe debt.
Overlap with Advertising
Highly personalized online ads that harness sensitive data, such as age or mental health status, to influence people's decisions can be prohibited, particularly where they result in significant harm or loss of personal autonomy.
3. AI-Enabled Social Scoring with Detrimental Treatment
What is Banned?
Social scoring AI that assigns or categorizes individuals/groups based on social conduct, personality traits, or other personal factors, if it leads to:
- Adverse outcomes in unrelated social contexts, or
- Unfair or disproportionate treatment grounded in questionable social data.
Why it Matters
These systems can produce discriminatory or marginalizing effects, such as penalizing individuals for online behavior unrelated to their professional competence.
Permissible Exceptions
Legitimate, regulated evaluations (e.g., credit assessments by financial institutions tied to objective financial data) remain allowed, as they do not fall under the unacceptable risk category.
4. Predictive Policing Based Solely on AI Profiling or Personality Traits
What is Banned?
AI systems that try to predict criminal acts exclusively from profiling or personality traits (e.g., nationality, economic status) without legitimate evidence or human review.
Why it Matters
Such practices contravene the presumption of innocence, promoting stigma based on non-criminal behavior or demographics. The Act stands firm against injustice that arises from labeling or profiling individuals unfairly.
Legitimate Uses
AI used for “risk analytics,” such as detecting anomalous transactions or investigating trafficking routes, can still be permissible—provided it is not anchored solely in profiling or personality traits.
5. Untargeted Scraping of Facial Images to Build or Expand Facial Recognition Databases
What is Banned?
AI systems that collect facial images in an untargeted manner from the internet or CCTV to expand facial recognition datasets. This broad data collection, often without consent, risks creating mass surveillance.
Why it Matters
Preventing these invasive tactics is crucial for upholding fundamental rights like privacy and personal freedom. This aligns with the GDPR's stance on the lawful processing of personal data, as demonstrated by GDPR-related penalties imposed on companies.
6. AI Systems That Infer Emotions in Workplaces and Education
What is Banned?
Real-time tools evaluating individuals' emotions or intentions via biometric signals (e.g., facial expressions, vocal tone) in workplace or educational settings.
Why it Matters
Such systems often rely on questionable scientific validity, risk reinforcing biases, and can produce unfair outcomes—for instance, penalizing employees or students for perceived negative emotional states.
Exceptions
Healthcare and safety use cases, where emotional detection is applied to prevent harm (e.g., driver fatigue systems), remain permissible.
7. Biometric Categorization AI Systems That Infer Sensitive Personal Traits
What is Banned?
AI systems assigning individuals to categories suggesting sensitive attributes—like race, religion, political beliefs, or sexual orientation—derived from biometric data (e.g., facial characteristics, fingerprints).
Why it Matters
Misuse of such categorization could facilitate housing, employment, or financial discrimination, undermining essential principles of equality and fairness.
Lawful Exemptions
Certain lawful applications may include grouping people by neutral attributes (e.g., hair color) for regulated, legitimate needs, provided these actions comply with EU or national law.
8. AI Systems for Real-Time Remote Biometric Identification (RBI) in Publicly Accessible Spaces for Law Enforcement
What is Banned?
AI performing real-time RBI (e.g., instant facial recognition) in public places for law enforcement purposes.
Why it Matters
This technology can severely infringe on privacy and freedoms, allowing near-instant tracking of individuals without transparency or oversight. It risks disproportionate impacts on marginalized communities due to inaccuracies or biased algorithms.
Exceptions
In narrowly defined scenarios, law enforcement may use real-time RBI to verify identity if it serves a significant public interest and meets stringent conditions (e.g., fundamental rights impact assessments, registration in a specialized EU database, judicial or administrative pre-approval). Member States can adopt additional or more restrictive rules under their national laws.
Preparing for Compliance and Avoiding Banned Practices
- Identify Potential Risks Early
Given the tight timeline for prohibited practices, organizations should swiftly assess their AI use cases for any red flags. This typically involves reviewing data collection methods, algorithmic decision-making processes, and user-targeting strategies.
- Build Internal Compliance Frameworks
Construct robust oversight structures—e.g., internal guidelines and approval flows for AI deployment. Ensure relevant teams (Legal, Compliance, IT, Product) cooperate to analyze potential risk areas.
- Consult Experts as Needed
Regulators expect vigilance. Independent audits or expert reviews can be invaluable in pinpointing non-compliant processes before they become enforcement issues.
- Consider the Full Lifecycle of AI Solutions
From concept to deployment and post-market monitoring, compliance must be ongoing. Banned practices can arise at any stage if AI systems inadvertently embed manipulative or discriminatory mechanisms.
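To make the first step concrete, a use-case inventory can be screened against the eight Article 5 categories before human legal review. The sketch below is a hypothetical, simplified helper; the flag names and Article 5 letter mapping are illustrative assumptions, not legal criteria:

```python
# Hypothetical screening helper: maps self-reported use-case flags to the
# Article 5 prohibition they may trigger, so flagged cases can be routed
# to human legal review. Flag names are illustrative, not official terms.
PROHIBITED_FLAGS = {
    "subliminal_manipulation": "Art. 5(1)(a) manipulative or deceptive techniques",
    "exploits_vulnerability": "Art. 5(1)(b) exploitation of vulnerabilities",
    "social_scoring": "Art. 5(1)(c) social scoring",
    "crime_prediction_profiling": "Art. 5(1)(d) crime prediction from profiling",
    "untargeted_face_scraping": "Art. 5(1)(e) untargeted facial-image scraping",
    "workplace_emotion_inference": "Art. 5(1)(f) emotion inference at work/school",
    "biometric_categorisation": "Art. 5(1)(g) sensitive biometric categorisation",
    "realtime_public_rbi": "Art. 5(1)(h) real-time remote biometric identification",
}

def screen_use_case(flags: set[str]) -> list[str]:
    """Return the prohibitions a use case may fall under, for escalation."""
    return [desc for key, desc in PROHIBITED_FLAGS.items() if key in flags]

# A recommendation engine alone raises no Article 5 flag; adding a
# social-scoring component does.
hits = screen_use_case({"recommendation_engine", "social_scoring"})
```

A screen like this only surfaces candidates; whether a practice actually crosses the Article 5 threshold (e.g., significant harm, detrimental treatment) requires case-by-case legal analysis.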
Fusefy AI: Your Partner for Safe AI Adoption
Readying your organization for the EU AI Act is a complex process, given the 24-month grace period for most requirements and the shorter 6-month window for prohibitions. Proactive planning is essential to prevent reputational damage, regulatory scrutiny, and major fines.
- Discover Your Risk Profile: Try our EU AI Act Risk Calculator to see where your business may be exposed.
- Stay Ahead of Regulatory Curves: Schedule a call with Fusefy AI to learn how we can help you devise a compliance strategy that addresses both immediate and long-term challenges under the AI Act.
Conclusion
With the AI Act approaching full implementation, organizations must pay close attention to which AI systems are permitted and which are outright banned. By focusing first on the implications and timelines, it becomes clear that the EU intends to protect fundamental rights from high-risk, manipulative, or privacy-invasive AI applications. Aligning your AI roadmap with these evolving standards, especially for the soon-to-be-enforced prohibitions, will help ensure you remain compliant, competitive, and committed to responsible innovation.
While the EU seeks to lead in responsible AI governance, the U.S. is racing to solidify its global AI dominance through acceleration and investment. For the latest on U.S. AI policy, read our blog Trump’s Latest AI Announcements: Stargate Project Ushers.