April 14, 2026

EU AI Act: A Practical Guide for Spanish Businesses

Classification and Timelines: The Regulatory Framework for AI in Spain

The EU AI Act (Regulation (EU) 2024/1689) establishes a risk-based legal framework for any company marketing or using artificial intelligence systems within the European Union. Obligations are divided into four categories: unacceptable risk (prohibited), high risk (subject to strict governance and transparency requirements), limited risk, and minimal risk. The implementation calendar is critical: prohibitions on banned systems took effect in February 2025, rules for general-purpose AI models in August 2025, and the bulk of obligations for high-risk systems apply from August 2026.

This regulation affects not only developers but also professional users (deployers) who integrate these technologies into their business processes. For a Spanish company, compliance requires data audits, exhaustive technical documentation, and post-market monitoring systems. Ignoring these deadlines can result in fines of up to €35 million or 7% of total worldwide annual turnover (whichever is higher), depending on the severity of the infringement and the size of the organization.

Categorizing Systems According to Risk Level

The first step for any General Counsel or CTO is to conduct an inventory of AI systems currently in use or under development and classify them according to the criteria of the Act, as interpreted by the European AI Office. This classification determines the level of investment required for compliance and oversight.

Unacceptable-risk systems are those that pose a clear threat to safety or fundamental rights. This includes cognitive manipulation techniques, social scoring by public authorities, and certain uses of real-time biometric identification in public spaces. These systems had to be decommissioned by February 2025.

The high-risk category is the one most likely to impact the Spanish business landscape. It includes AI systems used in critical infrastructure, education, employment (such as CV screening), essential public services, and the administration of justice. Companies deploying these solutions, such as our Talent Verify AI candidate screening system, must ensure that algorithms are transparent, traceable, and under constant human supervision.

Finally, limited-risk systems, such as chatbots or content generators, have basic transparency obligations: the user must be aware they are interacting with a machine. Minimal-risk systems, such as spam filters, carry no additional obligations under the Act beyond existing GDPR compliance.
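
As a practical starting point, the inventory can be a simple structured register that records each system and its tier. The Python sketch below is illustrative only: the system names and their mapping to tiers are hypothetical, and real classification requires legal analysis against the Act (notably Article 5 and Annex III).

    # Minimal AI-system inventory sketch. The four tiers follow the Act;
    # the example systems and their classification are hypothetical.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited: decommission
        HIGH = "high"                  # governance, documentation, logging
        LIMITED = "limited"            # transparency duties
        MINIMAL = "minimal"            # no extra obligations under the Act

    @dataclass
    class AISystem:
        name: str
        purpose: str
        tier: RiskTier

    inventory = [
        AISystem("cv-screener", "ranks job applications", RiskTier.HIGH),
        AISystem("support-chatbot", "answers customer questions", RiskTier.LIMITED),
        AISystem("spam-filter", "filters inbound email", RiskTier.MINIMAL),
    ]

    # List the systems that drive compliance cost first.
    for system in sorted(inventory, key=lambda s: list(RiskTier).index(s.tier)):
        print(f"{system.tier.value:>12}: {system.name} ({system.purpose})")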

Technical Obligations and Documentation for the CTO

From a technical perspective, the AI Act demands a transformation in how software systems are designed and maintained. For the CTO, this means AI can no longer be a "black box." Traceability becomes the core requirement.

Every high-risk AI system must maintain updated technical documentation describing the design, algorithmic logic, and training data used. It is mandatory to implement automatic logging capabilities to monitor the system's performance throughout its lifecycle. This is particularly relevant for sales automation solutions and RPA agents, where every decision must be explainable in the event of an audit.
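
As an illustration of what such logging might look like, the sketch below writes one append-only record per automated decision. The field names and the cv-screener example are hypothetical; the Act prescribes the logging capability, not this exact schema.

    # Sketch of automatic decision logging for lifecycle traceability.
    import json
    import logging
    import uuid
    from datetime import datetime, timezone

    logger = logging.getLogger("ai_audit")
    logging.basicConfig(level=logging.INFO)

    def log_decision(system: str, model_version: str, inputs: dict, output: dict) -> str:
        """Write one audit record per automated decision and return its id."""
        record = {
            "event_id": str(uuid.uuid4()),           # unique audit reference
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "model_version": model_version,          # ties the decision to a model
            "inputs": inputs,                        # what the model saw
            "output": output,                        # what it decided, and why
        }
        logger.info(json.dumps(record))
        return record["event_id"]

    # Hypothetical usage after a model call:
    log_decision(
        system="cv-screener",
        model_version="2026-03",
        inputs={"candidate_ref": "anon-4821"},       # pseudonymised per GDPR
        output={"score": 0.72, "top_features": ["experience", "languages"]},
    )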

The quality of training data is another fundamental pillar. The Act requires datasets to be relevant, representative, and, as far as possible, free of biases that could lead to illegal discrimination. This forces Spanish companies to establish much stricter data governance protocols than those currently in place, ensuring the lawfulness of information provenance and respect for intellectual property.
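
One narrow, concrete check in that direction is comparing outcome rates across a protected attribute in the training set, sketched below. The column names, sample rows, and the four-fifths threshold are assumptions for illustration; a real bias audit under the Act is far broader.

    # Sketch of a basic disparity check on labelled training data.
    # The 0.8 ("four-fifths") threshold is a common heuristic, not a
    # figure taken from the Act itself.
    from collections import defaultdict

    def selection_rates(rows: list[dict], group_key: str, label_key: str) -> dict:
        totals, positives = defaultdict(int), defaultdict(int)
        for row in rows:
            totals[row[group_key]] += 1
            positives[row[group_key]] += row[label_key]
        return {group: positives[group] / totals[group] for group in totals}

    training_rows = [  # hypothetical screening labels
        {"gender": "f", "hired": 1}, {"gender": "f", "hired": 0},
        {"gender": "m", "hired": 1}, {"gender": "m", "hired": 1},
    ]

    rates = selection_rates(training_rows, "gender", "hired")
    best = max(rates.values())
    if best and min(rates.values()) / best < 0.8:
        print(f"Potential disparate impact, review dataset: {rates}")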

To facilitate this compliance, platforms like SINAPSIS are deployed entirely within the client's own infrastructure, preserving data sovereignty. Because no data is sent to external servers outside European jurisdiction, risk management is significantly simplified, and log control and human supervision remain under the CTO's direct authority.

The Role of the General Counsel in AI Governance

For the legal department, the AI Act represents a compliance challenge that overlaps with GDPR. The General Counsel must lead the creation of a Risk Management System to identify and mitigate potential AI impacts on fundamental rights.

This system is not a static document but an iterative process. Impact assessments must be performed before the system is put into service. Furthermore, the General Counsel must ensure that contracts with technology providers include clear clauses regarding liability, security updates, and access to information necessary to comply with Spanish supervisory authorities (such as AESIA).

Transparency is a legal obligation as well as a technical one. Users must be informed clearly and understandably about how the AI system works and what data it uses. In the case of AI voice agents, for example, it is imperative that the caller is informed from the start of the call about the agent's artificial nature to comply with the transparency principles of the Act.
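
A minimal sketch of how a call flow can front-load that disclosure, assuming a hypothetical telephony stack (play_audio and handle_request are stand-ins, and the wording of the notice is illustrative):

    # Sketch of a voice-agent call flow that discloses its artificial
    # nature before any dialogue begins (hypothetical telephony API).
    DISCLOSURE = (
        "Le informamos de que está hablando con un asistente virtual "
        "basado en inteligencia artificial."  # "You are speaking with an AI assistant."
    )

    def answer_call(call, play_audio, handle_request):
        play_audio(call, DISCLOSURE)                 # transparency first, always
        play_audio(call, "¿En qué puedo ayudarle?")  # "How can I help you?"
        return handle_request(call)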

Sanctions and the Cost of Non-Compliance

The sanctioning regime of the EU AI Act is one of the most severe in digital legislation to date. The fine ceilings are set in the Act itself and are designed to be both deterrent and proportionate (a worked example follows the list):

  1. Non-compliance with prohibited practices: up to €35 million or 7% of worldwide annual turnover, whichever is higher.
  2. Non-compliance with high-risk system obligations: up to €15 million or 3% of turnover, whichever is higher.
  3. Supplying incorrect information to authorities: up to €7.5 million or 1% of turnover, whichever is higher.
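
Because each cap is the higher of a fixed amount and a turnover percentage, the applicable maximum depends on company size. A quick worked sketch, using a hypothetical €600 million worldwide turnover:

    # Applicable maximum fine: the higher of the fixed amount and the
    # turnover percentage (the turnover figure below is hypothetical).
    def max_fine(fixed_eur: int, pct: float, turnover_eur: int) -> float:
        return max(fixed_eur, pct * turnover_eur)

    turnover = 600_000_000
    print(max_fine(35_000_000, 0.07, turnover))  # prohibited practices: 42,000,000
    print(max_fine(15_000_000, 0.03, turnover))  # high-risk obligations: 18,000,000
    print(max_fine(7_500_000, 0.01, turnover))   # incorrect information: 7,500,000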

For Spanish SMEs and start-ups, the Act caps each fine at the lower of the two amounts to avoid stifling innovation, but civil liability and reputational damage can be equally devastating. The "move fast and break things" strategy is therefore no longer viable in the European regulated environment. "Safety by design" is the only path to avoid administrative sanctions and litigation arising from erroneous or biased automated decisions.

Sovereign Implementation and Privacy Strategies

Data sovereignty has become a strategic asset. Companies opting for public cloud AI solutions often struggle to certify where their data is processed and who ultimately has access to it. In this context, the trend in Spain is shifting toward Private AI.

Deploying large language models on proprietary servers or controlled private clouds allows for more efficient compliance with the Act. By using SINAPSIS, organizations can run advanced AI processes without sensitive information ever leaving their security perimeter. This drastically reduces the exposure surface and facilitates the creation of technical documentation required by European law, as traceability is total from data ingestion to model output.

Furthermore, using local or sovereign infrastructure allows for better integration with existing security systems, such as firewalls and intrusion detection systems, aligning with the cybersecurity requirements the AI Act imposes on high-risk systems.

Frequently Asked Questions

Exactly which Spanish companies are affected by this regulation?
The AI Act affects any organization that uses or markets AI systems within the EU market, regardless of whether it is headquartered in Spain or abroad. This includes providers who develop the technology and deployers who use it for internal or business purposes. Virtually any company automating critical decisions via algorithms must evaluate its compliance level now to avoid heavy sanctions.

What is the difference between an AI provider and an AI deployer?
The provider is the entity that develops the AI system or markets it under its own brand. The deployer is the natural or legal person using that system under their own authority in their professional activity. Both have responsibilities: the provider must ensure technical conformity and CE marking, while the deployer must ensure the system is used according to instructions, maintain human oversight, and perform impact assessments where required by law.

Is an external audit mandatory for high-risk systems?
An external audit by a notified body is not always mandatory, except in specific cases such as biometric identification. For most high-risk systems, the regulation allows an internal conformity assessment conducted by the company itself, provided it strictly meets the documentation, risk management, and data quality requirements. However, seeking an external technical audit is highly recommended to mitigate legal risks and demonstrate due diligence to authorities.

How does the AI Act affect the use of tools like ChatGPT in the workplace?
General-purpose generative AI tools are subject to specific transparency rules. Companies must inform users when text, images, or audio have been artificially generated. If these tools are integrated into high-risk processes (such as recruitment), the company assumes the obligations of a high-risk system. For this reason, many companies are migrating toward private solutions like SINAPSIS to maintain total control over compliance and privacy.

What role does the Spanish Agency for AI Supervision (AESIA) play?
AESIA is the national authority responsible for overseeing compliance with the AI Act in Spain. Its role includes market surveillance, system inspections, imposing sanctions, and advising Spanish companies during the adoption process. It is the first agency of its kind in Europe and will play a fundamental role in the practical interpretation of the law and the management of "sandboxes", controlled testing environments for SMEs.

At HispanIA Data Solutions, we help Spanish businesses transition toward secure, sovereign AI that strictly complies with the European legal framework. If you wish to evaluate the impact of the AI Act on your organization or learn how SINAPSIS can protect your competitive advantage, do not hesitate to contact our technical and legal consultancy team.