JANUARY 2024 SCREENING COMPLIANCE UPDATE
ClearStar is happy to share the below industry-related articles, written by subject matter experts and published on the internet, to assist you in establishing and maintaining a compliant background screening program. To subscribe to the Screening Compliance Update or to view past updates, please visit www.clearstar.net/category/screening-compliance-update/.
FEDERAL DEVELOPMENTS
CFPB issues two new FCRA advisory opinions on background screening reports and disclosure of credit files to consumers
On January 11, 2024, the Consumer Financial Protection Bureau (CFPB) issued two new advisory opinions: Fair Credit Reporting; Background Screening and Fair Credit Reporting; File Disclosure. The advisory opinions are part of the CFPB’s ongoing efforts to clean up what the CFPB describes in its press release as allegedly “sloppy” credit reporting practices and to ensure credit report accuracy and transparency for consumers. As a reminder, advisory opinions are interpretive rules that provide guidance on the CFPB’s interpretation of a particular law or regulation.
The Biden Administration kicked off 2023 by issuing the “Blueprint for a Renter Bill of Rights” and directing the CFPB and Federal Trade Commission (FTC) to take actions in furtherance of those principles. In February, the CFPB and FTC issued a request for information on background screening in connection with rental housing. In July, the FTC issued a blog post reminding landlords, property managers, and other housing providers of their obligation under the Fair Credit Reporting Act to provide notice of adverse action when information in a consumer report leads them to deny housing to an applicant or to require the applicant to pay a deposit that other applicants would not be required to pay. In November, the CFPB released two reports concerning tenant background checks: “Consumer Snapshot: Tenant Background Checks” discusses consumer complaints received by the CFPB that relate to tenant screening by landlords, and “Tenant Background Checks Market” looks at the practices of the tenant screening industry. The CFPB has previously addressed the issues of accurate credit reporting and investigating credit report disputes in its supervisory highlights.
Background Checks
In the first advisory opinion, the CFPB addresses the provision of background check reports.
Background checks are used by landlords and employers to make rental and employment determinations, respectively. Background check reports prepared by employment and tenant screening companies often contain information compiled from several sources about a consumer’s credit history, rental history, employment, salary, professional licenses, criminal arrests and convictions, and driving records. The CFPB advisory says prior research has determined that background check reports often contain false or misleading information that may adversely affect an individual’s housing or employment. In 2021, the CFPB issued an advisory opinion stating that it was unreasonable for consumer reporting agencies (CRAs) to use name-only matching (matching records to a consumer by first and last name without any other identifying information). The current advisory opinion highlights that CRAs covered by the Fair Credit Reporting Act (FCRA) must “follow reasonable procedures to assure maximum possible accuracy” under Section 607(b). Specifically, the CRA’s procedures should:
- Prevent the reporting of public record information that has been expunged, sealed, or otherwise legally restricted from public access;
- Ensure disposition information is reported for any arrests, criminal charges, eviction proceedings, or other court filings that are included in background check reports; and
- Prevent the reporting of duplicative information.
File Disclosure
In the second advisory opinion, the CFPB addresses consumers’ right to obtain their files from CRAs, indicating that CRAs must provide:
- At least one free file disclosure annually and in connection with adverse action notices and fraud alerts;
- Consumer’s complete file with clear and accurate information that is presented in (i) a way an average person could understand and (ii) a format that will assist consumers in identifying inaccuracies, exercising their rights to dispute any incomplete or inaccurate information, and understanding when they are being impacted by adverse information; and
- All sources for the information contained in consumers’ files, including both the originating sources and any intermediary or vendor sources, so consumers can identify the source and correct any misinformation (noting that only providing summarized information would not be compliant).
EEOC Enforcement Priorities
Separately, the Equal Employment Opportunity Commission (EEOC) has identified the following subject matter priorities in its Strategic Enforcement Plan:
- “Eliminating Barriers in Recruitment and Hiring” (including use of artificial intelligence for hiring, apprenticeship/internship programs, online-focused application processes, screening tools for hiring—such as pre-employment tests and background checks—and the underrepresentation of women and workers of color in industries such as manufacturing, tech, STEM, and finance);
- “Protecting Vulnerable Workers and Persons from Underserved Communities from Employment Discrimination” (including immigrant workers, persons with mental or developmental disabilities, temporary workers, older workers, and workers traditionally employed in low-wage jobs);
- “Addressing Selected Emerging and Developing Issues” (including the use of qualification standards or other policies that negatively affect disabled workers, protecting workers affected by pregnancy, childbirth, or related medical conditions, preventing discriminatory bias toward religious minorities or LGBTQIA+ individuals, and the use of artificial intelligence or automated recruitment tools for hiring);
- “Advancing Equal Pay for All Workers” (including a focus on employer policies that prevent or attempt to limit workers from asking about pay, inquiring about applicants’ prior salary histories, or prohibiting workers from sharing their compensation with coworkers);
- “Preserving Access to the Legal System” (including the use of overly broad releases or nondisclosure agreements, the implementation of unlawful mandatory arbitration provisions, and any failure to keep records required by statute or EEOC regulations); and
- “Preventing and Remedying Systemic Harassment.”
STATE, CITY, COUNTY AND MUNICIPAL DEVELOPMENTS
New Pennsylvania Legislation and Philadelphia Ordinance Amendment Tackle Pardoned Convictions, Expunged Records, and Negligent Hiring Liability
Pennsylvania and Philadelphia recently enacted changes that impact employer criminal background screening.
State Law
Enacted on December 14, 2023, and effective February 12, 2024, Pennsylvania’s House Bill No. 689 amends Pennsylvania law relating to the expungement of certain criminal record information and employer immunity when hiring individuals with expunged records. First, the legislation immunizes employers from liability for any claim related to the effects of expunged records or the lawful use of criminal record history information when an applicant voluntarily discloses an expunged conviction. This helps clarify a potential ambiguity under existing state law regarding whether an employer might still face negligent hiring liability for hiring an individual with an expunged criminal record where the individual goes on to commit some misconduct, such as injuring a third party. Previously, a negligent hiring lawsuit might contend that if the employer learned about an expunged criminal record by means other than a formal background check or official court records, the employer should have used the record to disqualify the person, and by not doing so, was negligent. Such a stance would seemingly be contrary to the purpose behind criminal record expungement, and the new legislation appears intended to prevent such an incongruous argument from surviving dismissal. Second, the law extends the availability of automatic expungements to pardons. The law requires the Pennsylvania Board of Pardons, which administers pardons, to notify the Administrative Office of Pennsylvania Courts (AOPC) on a quarterly basis of any pardons, and then requires the AOPC to notify the relevant Court of Common Pleas to order the record expunged.
Under the law as amended, criminal history record information that has been expunged or granted limited access cannot be used by private entities for employment, housing, or school matriculation purposes, unless required by federal law. If the law works as intended, employers should simply not see the pardoned cases because they are supposed to be unavailable to the public. However, given the number of required steps in the process and the different entities involved, it is not inconceivable that a candidate may believe an offense has been expunged when in fact it remains available in the public record. Moreover, several pieces of this process remain unclear, such as how quickly the AOPC will act upon receipt of information from the Board of Pardons, whether individuals will be notified that their pardoned convictions were expunged, and whether the court docket will be changed to reflect a pardon status while expungement is in process. Third, the law expands eligibility for Pennsylvania’s pre-existing limited access status for criminal records. Now, certain individuals who are free from conviction for seven years and otherwise meet requirements can petition for limited access; previously, the minimum threshold was 10 years. The law also clarifies categories of offenses that are and are not eligible for limited access petitions. Notably, this statewide legislation does not amend the existing requirements on an employer’s general use of criminal record history. Under existing law, Pennsylvania employers generally are required to use only job-related misdemeanor and felony convictions in making hiring decisions.
Philadelphia Ordinance
Philadelphia employers are subject to additional restrictions and procedural requirements under Philadelphia’s Fair Criminal Record Screening Standards Ordinance.
For its part, Philadelphia weighed in by enacting an amendment to that ordinance specifically addressing employer use of convictions subject to “exoneration.” The city ordinance, effective January 19, 2024, defines “exoneration” as reversing or vacating a conviction by pardon, acquittal, dismissal, or other post-conviction re-examination of the case by the court or other government official, and generally prohibits employers from denying employment based on convictions subject to “exoneration” as so defined. To prepare for the January and February effective dates of the laws, employers in Pennsylvania may want to ensure that they have considered how to handle situations in which a candidate identifies that an offense has been expunged, pardoned, granted limited access, or subject to other post-conviction relief before potentially denying employment based on such a record. Click Here for the Original Article
Reminders About California’s Fair Chance Act
California’s Fair Chance Act, also known as the “Ban the Box” law, took effect in January 2018. It generally prohibits employers with five or more employees from asking about an applicant’s conviction history before making a job offer. In 2021, California’s Civil Rights Department (formerly the Department of Fair Employment and Housing) announced new efforts to identify and correct violations of the Fair Chance Act. Since then, the Civil Rights Department has stepped up enforcement of the statute. As such, it is vital for covered employers to understand the requirements under the law.
Covered Employers
Public and private employers with five or more employees are covered by the law. This includes union hiring halls, labor contractors, temporary employment agencies, and client employers.
Requesting Background Checks
Covered employers may not ask applicants about their criminal history until after a conditional offer is extended.
However, even after a conditional offer, employers may not ask about or consider information about the following:
- An arrest that did not result in a conviction.
- Referral to or participation in a pretrial or posttrial diversion program.
- Convictions that have been sealed, dismissed, expunged, or statutorily eradicated.
If an employer intends to rescind a conditional offer based on the applicant’s conviction history, it must:
- Conduct an individualized assessment.
- Provide notification in writing of the preliminary decision that the applicant’s criminal history disqualifies the applicant from the position. The notice must also identify the disqualifying conviction(s).
- Provide a copy of the conviction history report to the applicant.
- Provide the applicant at least five business days to respond to the preliminary decision to rescind.
- Consider any response from the applicant.
- Provide final notice in writing about disqualification.
COURT CASES
Magistrate Judge Recommends No FCRA Liability for Accurately Reporting a Publicly Available Conviction that was Expunged
A magistrate judge in the Northern District of Georgia recently recommended granting summary judgment in a Fair Credit Reporting Act (FCRA) case in favor of a background reporting company on the grounds that a report given only to the consumer is not a consumer report and that including a valid conviction on a report does not violate the FCRA as long as its expungement is also included. In Peeples v. National Data Research, Inc. (NDR), the plaintiff applied to a pre-med program at a school that required a background report from NDR as part of its application requirements. NDR’s report on the plaintiff listed a 2003 criminal conviction for “giving false information.” The 2003 conviction, however, was expunged by court order in December 2019. The report from NDR did not contain any information about the expungement, and NDR did not have a policy to investigate whether court records have been expunged absent a consumer dispute. NDR provided the background report to the plaintiff only. The plaintiff then requested removal of the conviction due to the expungement. NDR confirmed the plaintiff was convicted for giving false information and that the judgment was expunged. It updated the report to include the expungement but did not remove the underlying conviction. The plaintiff sued NDR for violation of two sections of the FCRA: § 1681e(b), for failing to follow reasonable procedures to assure maximum possible accuracy in the report; and § 1681i, for failing to perform a reasonable reinvestigation, correct or suppress the expunged record, and maintain reasonable procedures to prevent that inaccurate information from reappearing. In the report and recommendation, the magistrate judge began the analysis by noting that the Eleventh Circuit draws a distinction between a consumer report and a credit file.
A consumer report requires communication of the information to a third party. Section 1681e(b) relates solely to information in a consumer report. Because NDR provided the report only to the plaintiff, it was not a consumer report. NDR’s knowledge the plaintiff would be submitting the report to her school did not make it a consumer report because NDR was not the one providing it to a third party. As a result, the magistrate judge recommended the district court grant NDR summary judgment on the § 1681e(b) claim. When considering the plaintiff’s claim under § 1681i, the court accepted that the report was inaccurate/incomplete when it did not include the expungement information, that the plaintiff notified NDR, and that her dispute was not frivolous, focusing only on whether NDR conducted a reasonable reinvestigation of the disputed item — the conviction — and whether failure to remove it caused the plaintiff harm. The magistrate judge noted that federal courts have generally held that including a valid conviction on a background report does not violate the FCRA, even if that conviction was later set aside, dismissed, or expunged. “For purposes of FCRA reporting, the historical fact of [the plaintiff’s] conviction was not altered by the expungement order, and the FCRA expressly permits consumer reporting agencies to report ‘records of convictions of crimes.’ 15 U.S.C. § 1681c(a).” Federal law, not South Carolina law, dictates the conviction is still a conviction. Ultimately, while the original report of conviction was incomplete, once NDR modified and corrected the report to include the expungement, nothing more was required and summary judgment was recommended in NDR’s favor on this claim. 
Click Here for the Original Article
CJEU Rules on Processing of Sensitive Data and Compensation Under the GDPR
On December 21, 2023, the Court of Justice of the European Union (“CJEU”) issued its judgment in the case of Krankenversicherung Nordrhein (C-667/21), in which it clarified, among other things, the rules for processing special categories of personal data (hereafter “sensitive personal data”) under Article 9 of the EU General Data Protection Regulation (“GDPR”) and the nature of the compensation owed for damages under Article 82 of the GDPR.
Background
The case related to the processing of an incapacitated employee’s personal data, including health data, by the medical service provider (“MDK”) of a health insurance fund in Germany. Under applicable law, the MDK draws up reports on the capacity of individuals insured by the health insurance fund to work. These may include reports concerning the health of MDK’s own employees. After becoming aware of the fact that a report concerning himself had been prepared, an employee of MDK sought compensation under Article 82 of the GDPR.
The CJEU’s Ruling
In its judgment, the CJEU ruled that in order to process sensitive personal data under the GDPR, there must exist both a legal basis under Article 6 of the GDPR and an applicable exception under Article 9 of the GDPR.
The CJEU also held that the rules and limitations on the processing of sensitive personal data under Article 9.2(h) (which allows processing of sensitive personal data where necessary for the purposes of preventive or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment, or the management of health or social care systems and services) and Article 9.3 of the GDPR (which provides that processing based on Article 9.2(h) of the GDPR must be conducted by or under the responsibility of a professional subject to the obligation of professional secrecy) are also applicable to a situation in which a medical service provider processes health data of its employees in its capacity as medical service provider to assess their working capacity. In other words, the medical service provider could rely on Article 9.2(h) of the GDPR to process its employees’ health data. The CJEU also clarified that Article 9.3 of the GDPR does not, by itself, require the controller to establish specific restrictions regarding the ability of work colleagues to access the health data of an employee. On the interpretation of Article 82 of the GDPR, the CJEU held that the GDPR establishes a system of fault-based liability in which the controller’s fault is presumed, unless it is capable of proving that it is not in any way responsible for the event giving rise to the damage. On the nature of the compensation owed to the data subject under Article 82 of the GDPR, the CJEU clarified that it is purely compensatory, and not punitive, in nature. Read the judgment. Click Here for the Original Article
Can you revoke an employment offer if the candidate fails a drug test?
The Human Rights Tribunal of Alberta recently determined that an employer did not discriminate against a job candidate by revoking an offer of employment after the job candidate failed a pre-employment drug test.
The candidate was offered a job as Business Continuity and Emergency Management Advisor. The position was classified as safety-sensitive, and the offer of employment specifically required the candidate to take a pre-employment drug test. The candidate testified that:
- he understood the job offer was conditional and that he would have to undergo a pre-employment drug test; and
- the drug testing company informed him that cannabis was one of the substances that he was being tested for.
Takeaway for Employers
This is a welcome decision for employers with safety-sensitive positions. It supports placing a positive onus on employees to appropriately disclose disability-related issues to their employer. Each case needs to be reviewed based on its own facts. There may be times when it is in an employee’s best interest to voluntarily disclose a disability, and there may be others where indicators exist that trigger an employer’s duty to inquire. Employers are well-advised to follow the lead of the employer in this case and ensure that offers of employment clearly articulate the conditions and expectations related to pre-employment drug and alcohol testing. We recommend that employers seek legal advice prior to implementing a drug or alcohol testing policy and prior to taking any disciplinary action against an employee who fails a drug or alcohol test. Click Here for the Original Article
INTERNATIONAL DEVELOPMENTS
UK-US data bridge: ICO publishes updated TRA guidance
Following the implementation of the UK-US Data Bridge in October 2023, the ICO has updated its Transfer Risk Assessment (TRA) guidance with a specific section on TRAs relating to transfers to the United States. The updated guidance makes it clear that parties can rely on the analysis published by the Department for Science, Innovation and Technology (DSIT) in relation to the Data Bridge when making data transfers on the basis of an alternative mechanism.
Data Bridge
The UK-US ‘Data Bridge’ took effect on 12 October 2023. It is an extension of the EU-US Data Privacy Framework, approved by the European Commission as adequate in respect of transfers from the EU to the US. As with previous, similar transatlantic arrangements, it can only be relied upon in respect of transfers to recipients who are certified under the scheme. As we highlighted in our analysis of the Data Bridge, this means that a TRA is still required for transfers to the US based on other transfer mechanisms. However, the DSIT analysis is still relevant in these circumstances. The ICO concludes that “it is reasonable and proportionate for you to rely on the DSIT analysis in your TRA, regardless of whether the personal information you are transferring is categorised as low, medium or high harm risk.”
ICO guidance on relying on the DSIT analysis
The ICO guidance states that a broad section of the DSIT analysis was directed at relevant US laws and practices more generally. It considered US respect for the rule of law and fundamental rights and freedoms, the existence of an effective and independent supervisory authority, and the US’s relevant international commitments. The framework for public authorities to access personal data following transfer to the US was considered to be satisfactory and underpinned by appropriate safeguards and redress.
To that end, organisations are encouraged to simply incorporate the DSIT analysis into their TRAs by reference, documenting that:
- the DSIT analysis concludes that US laws and practices provide adequate protections for people whose personal data is transferred to the US;
- it is reasonable and proportionate to rely on the DSIT analysis because the scope of assessment is as required under Article 45 UK GDPR; and
- any published updates will be kept under review.
EU Data Act
Separately, the EU Data Act entered into force on 11 January 2024. Among other things, the Data Act:
- Lays down rules on B2B and B2C data access. Manufacturers and providers are obliged to design products and services in such a manner that generated data are directly accessible to users, and to provide information to users on generated data, its accessibility, and users’ rights. At the users’ request, data holders are required to make the data available to users or to third parties without undue delay, free of charge and, where applicable, continuously and in real time. These obligations apply to connected products and related services placed on the EU market, irrespective of the place of establishment of the manufacturers and providers.
- Establishes a ban on unfair contractual terms on data sharing and introduces non-binding model contractual terms.
- Provides for a harmonized framework for the access and use of data held by the private sector, by public sector bodies, the Commission, the European Central Bank, and EU bodies.
- Introduces restrictions on non-EU governmental access and international transfers of non-personal data, by requiring providers of data processing services to take technical, organizational, and legal measures to prevent unlawful access and transfers.
- Introduces requirements to enable switching between providers of cloud services and of other data processing services, by requiring providers to take all reasonable measures to facilitate the process of achieving functional equivalence in the use of the new service. Costs arising from the switching process can only be charged to the customers until 12 January 2027.
- Introduces interoperability requirements for participants in data spaces that offer data or data services, data processing service providers, and vendors of applications using smart contracts.
- Includes an obligation for EU Member States to lay down rules on penalties for infringements of the Data Act, and EU supervisory authorities may impose administrative fines as provided in the EU GDPR for certain infringements of the Data Act.
Background to the Schufa ADM judgment
The case emerged under the following circumstances. A loan application was refused based on the data subject’s Schufa score, which was used as a core component in the loan decision. Following a data subject access request (Article 15 GDPR), Schufa provided the data subject with the score and explained, in high-level terms, the methods used to calculate that score. However, Schufa cited trade secrecy as justification for not providing information related to the weighting that lay behind the scoring system. The data subject then took a complaint to the Data Protection and Freedom of Information Commissioner for the Federal State of Hesse, Germany (the HBDI). The HBDI found that it had not been established that Schufa’s credit score processing was non-compliant with Section 31 of the German Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG), which governs the requirements for calculating scores. It also confirmed that Schufa does not need to share the details or mathematical formula related to how information about an individual is weighted to create the individual’s Schufa score. The data subject appealed the HBDI’s decision to the Administrative Court, Wiesbaden, Germany. That court then made a reference to the CJEU.
Reference to the CJEU
The key question of the reference was as follows: “whether Article 22(1) of the GDPR must be interpreted as meaning that the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes ‘automated individual decision-making’ within the meaning of that provision, where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person.” The pivotal issue in the case focused on Article 22 GDPR: whether the Schufa score constituted a decision based solely on automated decision-making (ADM), whether that decision produced legal effects concerning the data subject or similarly significantly affected them, and thus whether Schufa should have shared more details on the logic behind the decision. Under Article 22 GDPR, the data subject has the right not to be subject to such ADM. There are exceptions under Article 22(2), specifically: (a) the decision is necessary for the performance of a contract between the data subject and a data controller; (b) the decision is authorised by Union or Member State law to which the controller is subject and there are safeguards to protect the data subject; and (c) the decision is based on the data subject’s explicit consent. If relying on Article 22(2)(a) or (c), controllers will have to offer human intervention and a way for the data subject to express a view or contest the decision.
The key elements of the ruling
The CJEU found:
- The broad scope of the concept of ‘decision’ is confirmed by Recital 71 GDPR. That concept is broad enough to encompass calculating a credit score based on a probability value.
- A credit score based on a probability value in this context, at the very least, significantly affects the data subject.
- By calculating a credit score, a credit reference agency makes an automated decision within the terms of Article 22 GDPR when a third party draws strongly on the probability value or score to establish, implement or terminate a contractual relationship with the data subject. The CJEU noted a risk of circumventing Article 22 of the GDPR and a lacuna in protections if a narrow approach was taken and the Schufa score was only regarded as preparatory.
- Article 22(1) GDPR provides for a prohibition in principle, i.e., a data subject does not need to invoke the infringement individually by making a request.
- The question of whether Section 31 BDSG constitutes a legal basis in conformity with EU law will now have to be examined by the referring Administrative Court of Wiesbaden.
Wider implications
The judgment is part of a trend towards broad, data subject-friendly interpretations of the GDPR by the CJEU, turning it into consumer law as much as privacy law. The CJEU’s judgment on Article 22 GDPR was set out in broad terms and can be applied to other situations related to scoring systems. The judgment refers to a contractual relationship with the data subject in general terms rather than just the specifics of the loan agreement process, indicating the potential breadth of relevance. The judgment could therefore have implications for a range of scoring processes and decision making; for example, credit reference agencies also provide services for employment checks, and anti-money laundering (AML) service providers can also offer digital services based on scores. Although the judgment has broad implications, it does not follow that a large number of automated scoring systems are immediately caught by Article 22 GDPR, or are immediately unlawful. It will depend on how the score relates to the final decision made and what role it plays as a factor under consideration. The controller may also have a legal basis to carry out that ADM and safeguards in place as specified under Article 22 GDPR. There is now a significant expectation that data protection authorities will need to provide guidance on what “draws strongly” means in practice.
What comes next from data protection authorities?
A number of German Data Protection Authorities (DPAs) have issued statements following the judgment. The Hamburg DPA issued a statement hailing the judgment’s “ground-breaking importance for AI-based decisions” and noting that it “has consequences far beyond the scope of credit agencies – because it is transferable to the use of many AI systems”. The DPA also gave an example of AI analysing which patients are particularly suitable for a medical study, as an illustration of where the judgment may also make a difference. The Data Protection Authority for Lower Saxony has also issued a statement and indicated: “The CJEU’s view on the interpretation of the term ‘automated decision’ could also have further implications, for example on systems that prepare decisions with the help of algorithms or artificial intelligence or make them ‘almost alone’, so to speak”. Given the hype around, and importance of, AI, particularly generative AI, we can therefore expect further decisions and guidance from DPAs in 2024, setting out how the judgment will apply in a range of scenarios. The European Data Protection Board (EDPB) may also need to update its guidelines on automated decision making and profiling. The guidelines were adopted in 2018 and will soon be six years old. While the Schufa CJEU judgment does not contradict the EDPB’s guidance (for example, the EDPB had already found that Article 22 operated as a prohibition in principle), the EDPB may want to expand on the wider implications and explain in more detail what the judgment means in practice, including the concept of “draws strongly” in decision making.
Controllers will need to work through the following steps:
- Identify any processes and systems using Schufa scores or any other scores or probability values, including when deploying AI systems.
- Assess whether these systems, including when contracted to third parties, make any decisions that “draw strongly” from the score and would thus now be caught by Article 22(1). If so, can adjustments be made to ensure that reliance on the score in the final decision falls below the “draws strongly” test? If the answer remains that Article 22(1) applies, the controller will need to find a specific legal basis under Article 22(2) to carry out the ADM and apply Article 22(3) safeguards. If the controller is using special category data, the further conditions of Article 22(4) apply. It may currently be the case that certain organisations do not have a contract in place and have not obtained explicit consent as a relevant legal basis. Some EU Member States could seek to provide new national legislation as a legal basis alongside appropriate safeguards.
- If Article 22(1) applies, the controller will need to meet the transparency requirements of Articles 13 or 14 and 15 GDPR. The controller may need to update existing privacy notices and information pop-up windows. The information must include, at least, meaningful information about the logic involved, as well as the significance and the envisaged consequences of the processing for the data subject. The guidance provided by the UK data protection regulator, the Information Commissioner’s Office (ICO), on explainability and AI is likely to be helpful in this situation.
- Contracts may also need to be re-examined in a number of different contexts – both the contract between the data subject and the organisation making the final decision, and the contract between the two organisations.
- Following the judgment, the controller may need to review its data protection impact assessments and other documentation such as legitimate interest assessments and records of processing activities.
Litigation risk
Lastly, in light of the EU Representative Actions Directive ((EU) 2020/1828) and the increasing trend for data litigation, there is a risk that compensation claims may be launched not just against Schufa but also against organisations using the Schufa score. Allen & Overy’s blog from May 2023 assesses the recent CJEU jurisprudence on compensation, including the finding that mere infringement of the GDPR does not confer a right to compensation. Allen & Overy’s blog on the collective redress action for consumers in Germany from March 2023 considers the incoming implementation of EU representative actions under the Directive. A key question will be whether a de minimis threshold of equal damage across a class action claim is likely in Article 22 GDPR cases.
Looking ahead
As we look ahead to 2024 we can expect the Schufa judgment to play an important role in how AI is used in automated decision making and where the boundary falls between automated and partially automated decisions. Companies should look out for new guidance and enforcement decisions from DPAs in the year ahead. Click Here for the Original Article
Transfers of personal data outside the European Union: the French Data Protection Authority (CNIL) publishes a draft practical guide to carry out a Transfer Impact Assessment
The draft guide is published in the context of a public consultation. Organisations have one month to submit their observations to the CNIL. This article walks you through the context in which the CNIL publishes this guide, its content, and the key takeaways.
Why is the CNIL publishing this practical guide?
The General Data Protection Regulation (“GDPR”) aims to ensure an equivalent level of protection for personal data within the European Union (“EU”) by imposing a regulatory framework which applies to all processing carried out within the EU or relating to individuals residing in the EU. Some companies may transfer personal data outside the EU as part of their activities, for example by using service providers located in third countries, by using cloud services, or by communicating personal data to a parent company or subsidiaries. This raises the question of the protection of personal data transferred outside the EU, to countries that do not have the same legislation as the EU. Under the GDPR, personal data transferred outside the EU must be afforded the same level of protection as the GDPR guarantees within the EU. This is the case, for example, when personal data is transferred to a country benefiting from an adequacy decision, i.e. a country recognised by the European Commission as offering an adequate level of protection that does not require the implementation of additional measures.
In the absence of an adequacy decision, the data exporter, whether acting as a controller or a processor, must implement measures to compensate for the lack of data protection in the third country receiving the personal data, by providing appropriate safeguards (Binding Corporate Rules (BCR), Standard Contractual Clauses (SCCs), etc.). In its “Schrems II” judgment of 16 July 2020, the Court of Justice of the European Union (CJEU) ruled that standard contractual clauses alone may be insufficient to ensure effective protection of personal data, as they do not bind the authorities of third countries due to their contractual nature. As a consequence, the CJEU ruled that the data exporter must (i) verify whether the legislation of the third country receiving the personal data offers a level of protection that is essentially equivalent to that guaranteed in the EU and (ii) determine the appropriate additional measures where necessary, in addition to implementing the appropriate safeguards. In order to fulfil this obligation, and where the transfer of personal data is based on a transfer tool listed under Article 46 of the GDPR, the data exporter, in collaboration with the data importer, must carry out a data transfer impact assessment (also referred to as a “TIA”). The European Data Protection Board (EDPB) published, in June 2021, its recommendations on measures to supplement transfer tools to ensure compliance with the EU level of personal data protection, in which the EDPB details the different steps to be followed by the data exporter when carrying out a TIA and provides information on the supplementary measures that can be implemented and their effectiveness. Up until now, organisations have essentially relied on these recommendations and on the recommendations 02/2020 on essential European safeguards for surveillance measures to carry out TIAs.
It is in this context that the CNIL decided to draft its own practical guide to, in its own words, “help data exporters carry out their TIAs“. At this stage, the CNIL is publishing a draft guide for public consultation until February 12, 2024. Publication of the definitive guide is scheduled for 2024.
What does this guide contain?
This guide should be used as a methodology available to data exporters, enabling them to carry out a TIA. It should be noted that the CNIL relied heavily on the EDPB recommendations when elaborating this guide. Nevertheless, the guide is intended to be more practical than the EDPB recommendations, since it includes a TIA template that can be used as is by data exporters. The template takes the form of a table to be completed, including boxes to be ticked, which incorporates and reorganises the different steps and elements mentioned by the EDPB in its recommendations. The guide includes a first part dedicated to the questions to be asked in order to determine whether a TIA is necessary:
- Is the data in question personal data?
- Is there a transfer of personal data?
- What are the respective roles of the actors involved?
- Does the transfer comply with all the principles of the GDPR and, in particular, can you minimise the amount of personal data transferred or transfer anonymised data rather than personal data?
- Can your data be transferred to a country that has been recognised by the European Commission as offering an adequate level of protection?
The guide then details the six steps of the TIA itself:
- Know your transfer
- Document the transfer tool used
- Evaluate the legislation and practices in the country of destination of the data and the effectiveness of the transfer tool
- Identify and adopt supplementary measures
- Implement the supplementary measures and the necessary procedural steps
- Re-evaluate at appropriate intervals the level of data protection and monitor potential developments that may affect it.
- This guide does not constitute or contain, and is not intended to contain, an assessment of the legislation and practices of third countries. The CNIL therefore does not take a position on the level of personal data protection afforded by countries outside the EU, leaving it up to organisations to assess the legislation and practices of third countries.
- The template includes a section dedicated to the transfer tools used, which corresponds to step 2 of the TIA, as also listed by the EDPB. As provided by both the EDPB and the CNIL, a TIA is not required when the recipient country benefits from an adequacy decision. However, under this template it seems that the data exporter should complete step 1 (know your transfer) and step 2 (document the transfer tool used) for all transfers carried out, regardless of the transfer tool. In this context, the CNIL’s requirements seem to go beyond the EDPB’s. If this draft guide is adopted as is, organisations that, quite legitimately, do not carry out TIAs for transfers to countries benefiting from an adequacy decision will have to review their compliance strategy if they wish to align themselves with the CNIL’s more stringent requirements.
- If onward transfers are carried out by the data importer, the CNIL considers that a specific TIA should be carried out for each type of onward transfer. The EDPB recommendations are not that precise on this particular topic, since the EDPB merely states that “When mapping transfers, do not forget to also take into account onward transfers“. Covering the initial transfer and onward transfers within the same TIA does not therefore seem to be the CNIL’s recommendation. A separate document will have to be prepared for each onward transfer, which increases the burden on the data exporter compared with what is described in the EDPB recommendations.
- The CNIL also increases the role and obligations of the data importer, particularly when it is acting as a processor. In its Schrems II judgment, the CJEU ruled that the “controller or processor [must] verify, on a case-by-case basis and, where appropriate, in collaboration with the recipient of the data, whether the law of the third country of destination ensures adequate protection, under EU law […]“. The data importer may be a data controller or a data processor. The CNIL is rigorous towards the data importer, indicating that the importer’s cooperation is essential for the TIA to be carried out.
- he understood the job offer was conditional and that he would have to undergo a pre-employment drug test; and
- the drug testing company informed him that cannabis was one of the substances that he was being tested for.
Law 25’s requirements and staggered compliance timeline
Law 25’s requirements become effective in phases. Below is a list of Law 25’s primary requirements and mandatory compliance dates:
September 22, 2022
- Data Protection Officer (“Privacy Officer”) Appointment– In-scope organizations must appoint a Privacy Officer to oversee data subject requests, data breach reporting, and Privacy Impact Assessment processes. The Privacy Officer need not be located in Quebec and the role can be delegated to the highest senior employee responsible for overseeing compliance. In-scope organizations must publish the name, title, and contact information for the Privacy Officer on their websites. Id. at Section I, para. 3.1.
- Breach Reporting – In-scope organizations must notify the CAI and impacted individuals as soon as possible after discovery of a data breach that poses a “high risk of serious injury.” In-scope organizations must also maintain an internal register of all qualifying data breaches, which may be requested by the CAI. Id. at Section I, para. 3.5.
- Disclosure of Biometric Use– In-scope organizations must disclose to the CAI, sixty (60) days prior to implementation, whether they intend to collect and/or use any biometric data within a service, product, or system. Id. at Section III.
September 22, 2023
- Privacy Policy– In-scope organizations must publish a privacy policy on their websites. Id. at Section II, para. 8.
- Privacy Impact Assessments (“PIA(s)”)– In-scope organizations must conduct a PIA when certain triggering circumstances occur, such as when Personal Information is being transferred outside of Quebec or when risky processing occurs, including the processing of Sensitive Personal Information. The PIA requirement also applies where an in-scope organization entrusts a service provider, processor, or another third party outside Quebec with the task of collecting, using, communicating, or keeping Personal Information on their behalf. Id. at Section I, para. 3.3.
- Transparency & Consent– In-scope organizations must regularly audit their processes for collecting, storing, processing, and sharing Personal Information to ensure they are in compliance with Law 25’s requirements. Further, in-scope organizations must obtain explicit opt-in consent prior to collecting, storing, processing, and sharing Personal Information, subject to certain exceptions. Id. at Section II, para. 12. Law 25 also requires in-scope organizations to take an opt-in approach with respect to cookies and other tracking technologies, meaning that certain cookies cannot deploy on an in-scope organization’s website without the user’s affirmative consent to the deployment of such cookies. Id. at Section II, paras. 8.1 and 9.1. Law 25 does not specify the types of cookies that will require opt-in consent but rather states that cookies and similar tracking mechanisms whose function allows a user to be “identified, located, or profiled” are subject to the opt-in requirement. Without further guidance from the CAI on this subject, in-scope organizations should consider obtaining opt-in consent for the deployment of all non-essential cookies.
- Data Minimization– In-scope organizations must ensure Personal Information is destroyed and/or anonymized when retention is no longer reasonably necessary. Id. at Section 3, para. 23.
- Data Subject Rights– In-scope organizations are required to permit individuals to submit, and must respond to, certain privacy rights requests, such as the right to be informed, access, rectification, withdrawal of consent, and restriction of processing. Id. at Section I, para. 8.
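The opt-in cookie approach described above can be illustrated with a short sketch. This is a hypothetical gating check written for illustration only, not anything prescribed by Law 25 or the CAI; the category names and the consent mapping are assumptions.

```python
# Hypothetical sketch of opt-in gating for cookies under an opt-in regime:
# strictly necessary cookies may deploy without consent, while cookies that
# could identify, locate, or profile a user require a recorded affirmative
# opt-in first. Category names and the consent dictionary are illustrative.

ESSENTIAL = "essential"  # strictly necessary for the site to function

def may_deploy_cookie(category: str, consent: dict[str, bool]) -> bool:
    """Return True only if a cookie of this category may be deployed."""
    if category == ESSENTIAL:
        return True  # no consent needed for strictly necessary cookies
    # Default-deny: absent an explicit opt-in, the cookie must not deploy.
    return consent.get(category, False)
```

Under this default-deny approach, an analytics or advertising cookie simply never fires unless the user has affirmatively opted in, which is the posture the CAI's "identified, located, or profiled" language appears to call for.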
September 22, 2024
- Right to Portability– In-scope organizations must be able to produce a portable record of Personal Information stored about an individual upon request by the individual.
Considerations for in-scope organizations
While Law 25 went into force without much fanfare, organizations should waste no time in considering Law 25’s applicability. The following measures should be considered when assessing the applicability of Law 25 to business operations and preparing to comply:
- Understand whether Personal Information belonging to a Quebec resident has been or will be collected or processed through any offered service, product, or system;
- Ensure that when Personal Information is transferred outside of Quebec, a PIA is conducted;
- Ensure privacy notices are updated to accurately describe Personal Information collection, processing, and use;
- Determine whether appointing a Privacy Officer is necessary, if one is not already appointed in Canada and/or Quebec;
- Ensure that certain cookies or other similar tracking technologies are deployed on a website only upon affirmative opt-in by a user; and
- Develop a clear and actionable strategy for obtaining consent for processing Personal Information of Quebec residents or assess whether a consent exception can be relied upon.
MISCELLANEOUS DEVELOPMENTS
Rite Aid Settles FTC Allegations Regarding Use of Facial Recognition Technology On December 19, 2023, the Federal Trade Commission (“FTC”) announced that it reached a settlement with Rite Aid Corporation and Rite Aid Headquarters Corporation (collectively, “Rite Aid”) to resolve allegations that the companies violated Section 5 of the FTC Act (as well as a prior settlement with the agency) by failing to implement reasonable procedures to prevent harm to consumers while using facial recognition technology. As part of the settlement, Rite Aid agreed to cease using “Facial Recognition or Analysis Systems” (defined below) for five years and establish a monitoring program to address certain risks if it seeks to use such systems for certain purposes in the future. According to the FTC’s complaint, Rite Aid “used facial recognition technology in hundreds of its retail pharmacy locations to identify patrons that it had previously deemed likely to engage in shoplifting or other criminal behavior.” The FTC claimed that the technology sent alerts to Rite Aid’s employees when patrons were matched with entries in the company’s “watchlist database.” Rite Aid employees allegedly took action against patrons who triggered the matches by, for example, subjecting them to in-person surveillance. The FTC claimed that Rite Aid failed to consider or address foreseeable harm to patrons by such conduct, including failing to (1) test the technology’s accuracy, (2) enforce image quality standards necessary for the technology to function accurately, (3) take reasonable steps to train employees, and (4) “take steps to assess or address risks that its . . . 
[the] technology would disproportionately harm consumers because of their race, gender, or other demographic characteristics.” The proposed consent order places a number of restrictions and obligations on Rite Aid, including with respect to its use of a “Facial Recognition or Analysis System,” which it defines as “an Automated Biometric Security or Surveillance System that analyzes or uses depictions or images, descriptions, recordings, copies, measurements, or geometry of or related to an individual’s face to generate an Output.” An “Automated Biometric Security or Surveillance System,” in turn, is defined as “any machine-based system, including any computer software, application, or algorithm, that analyzes or uses Biometric Information of, from, or about individual consumers to generate an Output that relates to those consumers, notwithstanding any assistance by a human being in such analysis or use, and that is used in whole or in part for a Security or Surveillance Purpose,” subject to a few exceptions. Among other restrictions, the proposed consent order requires that Rite Aid:
- not deploy or use any Facial Recognition or Analysis System for five years, either in a retail store or an online retail platform;
- delete all photos and videos of consumers used in a Facial Recognition or Analysis System, including any data, models, or algorithms derived from such information;
- prior to deploying an Automated Biometric Security or Surveillance System in the future:
- Establish and maintain a monitoring program that, among other things, identifies and addresses risks that “will result, in whole or in part, in physical, financial, or reputational harm to consumers” and “any such harms [that] will disproportionately affect consumers based on race, ethnicity, gender, sex, age, or disability, alone or in combination”;
- Develop mandatory notice and complaint procedures that include providing written notice to consumers whose biometric information will be enrolled in the system;
- Develop a written retention schedule that, among other things, sets a time frame of deletion for biometric information that is no greater than five years, subject to certain exceptions; and
- implement a comprehensive information security program that includes safeguards based on the “volume and sensitivity” of the information that is at risk and the likelihood that the risk could result in unauthorized collection or misuse.
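The five-year retention ceiling in the consent order lends itself to a simple illustration. The sketch below is an assumption-laden example of how a retention check might be expressed; it is not part of the FTC's order, and the day-count approximation of "five years" is ours.

```python
from datetime import date, timedelta

# Illustrative only: check whether a biometric record has exceeded a
# five-year retention limit (the ceiling set in the consent order, subject
# to exceptions). The five-year window is approximated as 5 * 365 days.
RETENTION_LIMIT = timedelta(days=5 * 365)

def past_retention(enrolled_on: date, today: date) -> bool:
    """Return True if the record should have been deleted by `today`."""
    return today - enrolled_on > RETENTION_LIMIT
```

In practice a retention schedule of this kind would run as a recurring job against the enrollment register, flagging records for deletion review rather than deleting them outright, given the order's carve-outs.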
- That the employer “knew or had reason to know of the particular unfitness, incompetence, or dangerous attributes of the employee and could reasonably have foreseen that such qualities create a risk of harm to other persons; and
- That, through the negligence of the employer in hiring the employee, the latter’s incompetence, unfitness, or dangerous characteristics proximately caused the injury.”
- Legal Authority and Consent: An organization must have and document its legal authority for collecting, using, disclosing and deleting personal information in the course of training, developing, deploying, operating or decommissioning a GenAI system. Notably, the Principles assert that using GenAI to infer information about an identifiable individual constitutes a “collection” of personal information and therefore requires a valid legal authority, such as consent. When relying on consent as its legal authority, an organization must ensure that such consent is specific, “valid and meaningful”, and not obtained through deceptive design patterns. An organization that sources personal information from a third party in connection with a GenAI system must ensure that the third party has collected the personal information lawfully and has a legal authority to disclose the personal information.
- Appropriate Purposes: An organization must avoid any collection, use and disclosure of personal information for inappropriate purposes and consider whether the use of a GenAI system is appropriate for a specific application. This includes avoiding the development, putting into service, or use of a GenAI system that violates the “No-Go Zones” already identified by Canadian privacy regulators (such as for discriminatory profiling or generating content that otherwise infringes on fundamental rights), as well as potential emerging No-Go Zones identified in the Principles (such as the creation of content for malicious purposes, e.g., deep fakes).
- Necessity and Proportionality: An organization must establish the necessity and proportionality of using GenAI, and personal information within a GenAI system, to achieve the intended purpose(s). The Principles further advocate for the use of anonymized, synthetic or de-identified data, rather than personal information, in GenAI systems whenever possible.
- Openness and Transparency: An organization must be transparent about its collection, use and disclosure of personal information, as well as potential risks to individuals’ privacy, throughout the development, training and operation of a GenAI system for which the organization is responsible. This includes, for example, clearly stating the appropriate purpose(s) for such collection, use and disclosure of personal information and meaningfully identifying when system outputs that could have a significant impact on an individual or group are created by a GenAI tool. This information should be made readily available before, during and after use of the GenAI system.
- Accountability: A robust internal governance structure should be developed to ensure compliance with privacy legislation, including defined roles and responsibilities, policies and practices establishing clear expectations with respect to compliance with privacy obligations, a mechanism to receive and respond to privacy-related questions and complaints, and a commitment to regularly revisiting accountability measures (including bias testing and assessments) based on technological and regulatory developments. The Principles also recommend that an organization undertake privacy impact and/or algorithmic impact assessments to identify and mitigate potential or known impacts of a GenAI system (or its use) on privacy and other fundamental rights.
- Individual Access: The Principles emphasize individuals’ right to access and correct the personal information about them that is collected during the use of a GenAI system or that is contained within a GenAI model. Accordingly, an organization must ensure that procedures exist for individuals to exercise such rights.
- Limiting Collection, Use, and Disclosure: An organization must limit the collection, use and disclosure of personal information to what is necessary to fulfill an appropriate, identified purpose. The Principles stress that publicly accessible personal information (including personal information published online) cannot be collected or used indiscriminately, including in connection with a GenAI system. Appropriate retention schedules must also be developed for personal information contained within a GenAI system’s training data, system prompts and outputs.
- Accuracy: Personal information used in connection with GenAI systems must be as accurate, complete and up-to-date as is necessary for the purpose(s) for which it is to be used. This obligation includes, without limitation, identifying and informing users of a GenAI system about any known issues or limitations regarding the accuracy of the system’s outputs, and taking reasonable steps to ensure that outputs from a GenAI system are as accurate as necessary for their intended purpose, particularly when the outputs will be used to make (or assist in making) decisions about one or more individuals, will be used in high-risk contexts, or will be released publicly.
- Safeguards: Safeguards must be implemented to protect personal information collected or used throughout the lifecycle of a GenAI system from risks of security breaches or inappropriate use. Such safeguards must be commensurate to the sensitivity of the personal information and take into account risks specific to GenAI systems, such as prompt injection attacks, model inversion attacks and jailbreaking.
- Considering the Impact on Vulnerable Groups: When developing or deploying a GenAI system, an organization should identify and prevent risks to vulnerable groups, including children and groups that have historically experienced discrimination or bias. GenAI systems should be fair and free from biases that could lead to discriminatory outcomes. For developers, this obligation includes ensuring that training data sets do not replicate or amplify existing biases or introduce new biases. Users of GenAI systems must oversee and review the systems’ outputs and monitor for potential adverse effects, particularly when such outputs are used as part of an administrative decision-making process or in highly impactful contexts (e.g., employment, healthcare, access to finance, etc.).
- The employer had reason to know that an employee had engaged in unlawful workplace discrimination in the past but decided to retain them; and
- The employer failed to take affirmative steps such as:
- developing and implementing anti-harassment training; and
- developing and implementing written policies and procedures that set the expectation that employment discrimination will not be tolerated in the workplace and provide employees with a roadmap for reporting allegedly unlawful behavior.
Let's start a conversation
At ClearStar, we are committed to your success. An important part of your employment screening program involves compliance with various laws and regulations, which is why we are providing information regarding screening requirements in certain countries, regions, etc. While we are happy to provide you with this information, it is your responsibility to comply with applicable laws and to understand how such information pertains to your employment screening program. The foregoing information is not offered as legal advice but is instead offered for informational purposes. ClearStar is not a law firm, does not offer legal advice, and this communication does not form an attorney-client relationship. The foregoing information is therefore not intended as a substitute for the legal advice of a lawyer knowledgeable of the user’s individual circumstances or to provide legal advice. ClearStar makes no assurances regarding the accuracy, completeness, or utility of the information contained in this publication. Legislative, regulatory, and case law developments regularly affect this area, which is evolving rapidly. ClearStar expressly disclaims any warranties or responsibility for damages associated with or arising out of the information provided herein.