{"id":10414,"date":"2026-04-28T13:39:45","date_gmt":"2026-04-28T12:39:45","modified":"2026-04-28T13:39:47","modified_gmt":"2026-04-28T12:39:47","slug":"what-do-employers-need-to-know-about-the-ai-act-rules-on-recruitment-performance-evaluation-and-ai-training","status":"publish","type":"post","link":"https:\/\/www.madarassy-legal.com\/en\/what-do-employers-need-to-know-about-the-ai-act-rules-on-recruitment-performance-evaluation-and-ai-training\/","title":{"rendered":"What Do Employers Need to Know About the AI Act? Rules on Recruitment, Performance Evaluation and AI Training"},"content":{"rendered":"If you are a business owner or HR manager and your team uses ChatGPT, Copilot or any other AI tool on a daily basis, the EU AI Act applies to you. The regulation imposes direct EU AI Act employer obligations (Hungarian: eu ai act munk\u00e1ltat\u00f3 k\u00f6telezetts\u00e9gei) in three key areas: recruitment, performance evaluation and mandatory AI training for employees. In this article, we provide a practical guide on how to meet these requirements on time and effectively.\n\n\n\nWhy Does the AI Act Affect Almost Every Hungarian Employer?\n\n\n\nWho qualifies as a &#8220;deployer&#8221; under the regulation?\n\n\n\nUnder Regulation (EU) 2024\/1689 \u2014 commonly known as the AI Act \u2014 any organisation that uses an AI system under its own authority in a professional context qualifies as a deployer. In practice, this means that if an employer allows the use of any AI-based tool, the AI Act HR compliance (Hungarian: AI Act HR k\u00f6telezetts\u00e9gek) requirements apply. In our firm&#8217;s experience, many businesses are unaware that everyday generative AI tools also fall within this scope.\n\n\n\nWhat are the key deadlines?\n\n\n\nThe regulation takes effect in stages. Since 2 February 2025, prohibited AI practices must be discontinued and AI literacy obligations are already in force. 
The penalty framework was activated on 2 August 2025, giving authorities the power to impose fines. Full compliance \u2014 including rules on high-risk systems \u2014 becomes mandatory on 2 August 2026, although the European Commission&#8217;s Digital Simplification Package may extend this deadline to the end of 2027.\n\n\n\nThe Risk Classification Logic \u2014 Which HR Activity Falls Where?\n\n\n\nUnacceptable, high, limited and minimal risk\n\n\n\nThe AI Act takes a risk-based approach, classifying AI systems into four categories. Minimal-risk systems (such as spam filters or recommendation engines) are subject to virtually no requirements. Limited-risk systems (such as chatbots) must meet transparency obligations: users must be informed they are interacting with AI.\n\n\n\nHigh-risk systems are subject to strict documentation, testing and oversight requirements. Unacceptable-risk systems are banned outright.\n\n\n\nWhy are recruitment and performance evaluation classified as &#8220;high risk&#8221;?\n\n\n\nAnnex III, point 4 of the AI Act explicitly lists AI systems used in employment and workforce management as high-risk. The reasoning is straightforward: these tools directly affect people&#8217;s livelihoods, careers and fundamental rights. Every employer should be aware that the entire spectrum \u2014 from recruitment screening tools to performance evaluation algorithms \u2014 may fall into this category.\n\n\n\nAI Act Recruitment Rules \u2014 What Is Allowed and What Is Not?\n\n\n\nJob advertising, screening and candidate assessment with AI\n\n\n\nUnder the regulation, any AI system used for targeted job advertising, analysing and filtering applications, or evaluating candidates qualifies as high-risk. This does not mean these tools are banned \u2014 but their use is subject to strict conditions. 
Employers must ensure high-quality training data, detailed documentation of the system&#8217;s operation and the implementation of risk management measures.\n\n\n\nHuman oversight and non-discrimination\n\n\n\nOne of the most important AI Act recruitment rules (Hungarian: AI Act toborz\u00e1s szab\u00e1lyai) is the requirement for human oversight. AI cannot make final hiring decisions on its own \u2014 a human must always have the final say. In addition, AI systems must operate free from discrimination, and high-risk systems must be accompanied by an EU declaration of conformity. In our legal practice, we often see companies overlook the fact that GDPR data protection rules also apply in full to AI-based recruitment.\n\n\n\nAI Employee Performance Evaluation \u2014 New Rules for the Workplace\n\n\n\nPromotion, dismissal and changes to employment conditions using AI\n\n\n\nIt is not only recruitment that falls under the regulation. Decisions made during the course of employment are equally affected. If an AI system is involved in promotions, dismissals or changes to working conditions, this also constitutes a high-risk application. This classification triggers the same documentation, transparency and oversight obligations as recruitment systems. It is therefore essential that employers consider the rules on AI employee performance evaluation beyond just the hiring process.\n\n\n\nWhat is the difference between analysis and decision-making?\n\n\n\nNot every use of AI automatically qualifies as high-risk. If a system merely analyses the outcomes of prior human decisions or identifies patterns without influencing future decisions, it may not fall into the stricter category. However, if AI actively contributes to shaping an employment-related decision \u2014 for example, by ranking employees based on performance \u2014 it clearly constitutes a high-risk application. 
Drawing this line requires case-by-case assessment for each system.\n\n\n\nWorkplace Emotion Recognition \u2014 What the AI Act Categorically Prohibits\n\n\n\nWhich systems fall under the ban?\n\n\n\nThe AI Act classifies workplace emotion recognition AI systems as unacceptable risk. This includes tools that infer emotional states, satisfaction levels or mental health from employees&#8217; biometric data \u2014 facial expressions, voice tone, posture or gestures. This ban has been in force since 2 February 2025 and already applies to every Hungarian employer. Our firm considers it particularly important to highlight this, as some performance evaluation software may contain hidden emotion recognition features.\n\n\n\nFines: up to EUR 35 million\n\n\n\nThe regulation imposes the most severe penalties for using unacceptable-risk AI practices: up to EUR 35 million, or 7% of the company&#8217;s annual global turnover \u2014 whichever is higher. The Hungarian implementing decree (Government Decree 344\/2025) sets the upper limit in forints at HUF 13.3 billion (approximately EUR 34 million). The AI market surveillance authority \u2014 the Ministry of National Economy \u2014 has been authorised to apply these sanctions since August 2025.\n\n\n\nMandatory AI Training Employer Obligations \u2014 AI Literacy in Practice\n\n\n\nWhat exactly does Article 4 of the AI Act require?\n\n\n\nArticle 4 of the AI Act requires organisations that deploy AI systems to take measures to ensure their employees have an adequate level of AI literacy. This obligation has been applicable since 2 February 2025 \u2014 in other words, it is not a future task but a current requirement. The regulation does not prescribe a specific format: training may be delivered through e-learning, classroom sessions, internal policies or a combination of these. 
The key point is that employees must understand how the AI systems they use work, their capabilities and their risks.\n\n\n\nHow can employers make training mandatory?\n\n\n\nUnder the Hungarian Labour Code (Mt.), employers have the right to issue instructions, which means they can unilaterally require employees to complete AI training if it is relevant to their role. In this case, training time counts as working hours, wages are payable and the employer bears the full cost. A study contract may also be used as an alternative, offering an incentive-based solution for both parties. From the perspective of mandatory AI training as an employer obligation (Hungarian: MI k\u00e9pz\u00e9s k\u00f6telez\u0151 munk\u00e1ltat\u00f3), the legal tools are in place \u2014 the question is whether companies act in time.\n\n\n\nDocumentation and verifiability\n\n\n\nIn the event of a regulatory inspection, the employer must be able to demonstrate that it has taken all necessary steps to ensure employee AI literacy. This is why proper documentation is the key to complying with AI use workplace regulations: training plans, attendance records, test results and proof of internal policy distribution. A policy document alone is not enough \u2014 AI literacy must become part of everyday work practice. In our experience, the most successful organisations treat AI training not as a one-off obligation but as an ongoing development programme.\n\n\n\nAI Use Workplace Regulations \u2014 Applying the GDPR and the AI Act Together\n\n\n\nWhy is it not enough to comply with the AI Act alone?\n\n\n\nWhen using AI in relation to employees, the GDPR and the data protection provisions of the Hungarian Labour Code must also be fully observed alongside the AI Act. Employers may only request personal data from employees that is relevant to the establishment, performance or termination of the employment relationship. 
EU AI Act employer obligations therefore do not operate in isolation \u2014 the legal framework is complex, and compliance can only be assessed as a whole.\n\n\n\nShadow AI \u2014 the invisible risk\n\n\n\nOne of the greatest practical challenges is the phenomenon of shadow AI, where employees use AI tools without the employer&#8217;s knowledge or approval. In such cases, not only may AI Act compliance obligations be breached, but trade secrets and personal data may also escape the organisation&#8217;s control. The most effective preventive measure is to establish an internal AI policy that clearly defines which tools may be used and under what conditions.\n\n\n\nHow Should Employers Prepare to Meet EU AI Act Employer Obligations by 2026?\n\n\n\nPractical steps to take now\n\n\n\nEU AI Act employer obligations cannot wait until August 2026 \u2014 preparation must begin now. The first step is to conduct an AI inventory: identify which AI systems are used within the organisation and which risk category they fall into. This should be followed by developing an internal policy, preparing a training plan and building a documentation system. Those who start preparing now will not only ensure regulatory compliance but also gain a competitive advantage in the market.\n\n\n\nWhy is it worth seeking legal advice?\n\n\n\nThe combined application of the AI Act, the GDPR, the Hungarian Labour Code and Act LXXV of 2025 (Hungarian AI Act Implementation Law) requires complex legal analysis. The risk classification of individual AI systems, the required content of documentation and the precise scope of training obligations are all questions where template solutions are not sufficient. 
The severity of regulatory fines alone \u2014 potentially reaching billions of forints \u2014 justifies making the preparation a structured, professionally guided process rather than a last-minute exercise.\n\n\n\nWe can help you prepare for AI Act compliance.\n\n\n\nMadarassy Law Firm, with over 20 years of experience, stands ready to support clients in the fields of AI regulation, data protection and employment law. Whether you need an AI inventory, an internal policy or help meeting your EU AI Act employer obligations \u2014 get in touch with us at&nbsp;www.madarassy-legal.com.","protected":false},"excerpt":{"rendered":"If you are a business owner or HR manager and your team uses ChatGPT, Copilot or any other AI tool on a daily basis, the EU AI Act applies to you. The regulation imposes direct EU AI Act employer obligations (Hungarian: eu ai act munk\u00e1ltat\u00f3 k\u00f6telezetts\u00e9gei) in three key areas: recruitment, performance evaluation and mandatory [&hellip;]","protected":false},"author":2,"featured_media":10428,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rop_custom_images_group":[],"rop_custom_messages_group":[],"rop_publish_now":"initial","rop_publish_now_accounts":[],"rop_publish_now_history":[],"rop_publish_now_status":"pending","footnotes":""},"categories":[291,63],"tags":[],"class_list":["post-10414","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-corporate","category-data-protection"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/posts\/10414","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embe
ddable":true,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/comments?post=10414"}],"version-history":[{"count":4,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/posts\/10414\/revisions"}],"predecessor-version":[{"id":10436,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/posts\/10414\/revisions\/10436"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/media\/10428"}],"wp:attachment":[{"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/media?parent=10414"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/categories?post=10414"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.madarassy-legal.com\/en\/wp-json\/wp\/v2\/tags?post=10414"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}