Members of the European Parliament (MEPs) have adopted a negotiating position on the Artificial Intelligence (AI) Act ahead of talks with European Union (EU) member states on the final shape of the law to “ensure that AI developed and used in Europe is fully in line with EU rights and values,” according to a press release from the MEPs dated June 14, 2023.
“EU rights and values” include human oversight, safety, privacy, transparency, non-discrimination, and social and environmental wellbeing. The AI Act establishes obligations for providers and deployers of AI systems and would prohibit systems that pose an unacceptable level of risk. The MEPs expanded the list of prohibited practices to include bans on intrusive and discriminatory uses of AI such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the sole exception of law enforcement use for the prosecution of serious crimes, and only after judicial authorization;
- Biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location, or past criminal behavior);
- Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (a violation of human rights and the right to privacy).
“Providers of foundation models – a new and fast-evolving development in the field of AI – would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and rule of law) and register their models in the EU database before their release on the EU market,” according to the MEP press release.
In addition, “Generative AI systems” based on such models, including the AI chatbot ChatGPT, “would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content,” the MEP press release explained.
ClearStar is a leading global Human Resources technology company specializing in background checks, drug testing, and occupational health screening. The use of AI by employers is one of the “2023 Top Trends in Workforce Screening” researched and compiled in a white paper by ClearStar. To learn more about ClearStar, please contact us.