November 2023 Screening Compliance Update
ClearStar is happy to share the below industry-related articles, written by subject matter experts and published on the internet, to assist you in establishing and maintaining a compliant background screening program. To subscribe to the Screening Compliance Update or to view past updates, please visit www.clearstar.net/category/screening-compliance-update/.
FEDERAL DEVELOPMENTS
To E-Verify or Not to E-Verify: Weighing the Pros and Cons
Employers often have questions about whether they should use E-Verify to help determine whether their new hires are authorized to work in the United States. The program, which matches I-9 data with the information in various government databases, is voluntary for most employers but mandatory for federal contractors and some employers in certain states. Ultimately, its goal is to help employers stay compliant with federal employment and immigration regulations. But is E-Verify right for you? Consider these five pros and five cons when deciding whether to incorporate it into your hiring process.
Top 5 Reasons Why Employers Should Consider Using E-Verify
- Electronic Verification: When hiring a new employee, you must complete a Form I-9 and physically examine the employee’s identity and work authorization documents. Notably, however, a new benefit just became available for employers that are enrolled in E-Verify, allowing them to conduct document verification electronically rather than in person.
- Speed: Employers that use E-Verify receive an initial determination almost immediately regarding a new employee’s authorization to work.
- Good Faith Defense: When an employer confirms the identity and employment eligibility of a newly hired employee using E-Verify procedures, it may rely on the system’s confirmation of that employee’s work-authorized status, creating a presumption of good faith in the hiring process. This serves as an extra layer of protection and confidence when it comes to compliance.
- Easy integration: E-Verify can be integrated into an employer’s existing onboarding and HR processes and can also be accessed online.
- Government Benefits: Depending on the state, employers may receive state contracts, grants, or incentives for using E-Verify. Additionally, enrollment in E-Verify is a requirement for being awarded certain federal government contracts.
Top 5 Reasons Why Employers May Hesitate to Use E-Verify
- Drain on Resources: Employers will need to learn how to use E-Verify, stay up to date with ever-changing regulations, and ensure their IT systems can manage the process and keep up with changes.
- False Positives and Negatives: E-Verify is not foolproof, and errors can occur. The system may sometimes flag individuals who are authorized to work (false positives) or fail to identify unauthorized workers (false negatives).
- Privacy Concerns: E-Verify involves the collection and storage of sensitive personal information, such as Social Security numbers. This has raised concerns about privacy and the potential for identity theft or misuse of this information.
- Extra Management: Employers that use E-Verify must consistently use the system for all employees and subject themselves to ICE audits to verify both I-9 and E-Verify compliance. Using E-Verify does not decrease the chance of an I-9 audit.
- Government Shutdown: Employers should consider recent and future threats of government shutdowns, as E-Verify is not available while the federal government is not operating. Although E-Verify will be available after a shutdown ends, there is a period where employers may not submit data.
Employers enrolled in E-Verify that use the alternative procedure for remote examination of Form I-9 documents must:
- Instruct the employee to transmit a copy of the document(s) for review.
- Examine copies of Form I-9 documents or an acceptable receipt to ensure that the documentation presented reasonably appears to be genuine and relates to the employee.
- Conduct a live video interaction with the employee presenting the document(s) to ensure that the documentation reasonably appears to be genuine and relates to the employee. The employee must present the same documents during the live video interaction that were previously transmitted for review.
- Retain a clear and legible copy of the documentation.
- On the Form I-9, check the box to indicate that you used an alternative procedure in the Additional Information field in Section 2.
- If completing the remote documentation examination for a rehire or reverification, check the box on Form I-9 in Supplement B.
The revised Form I-9 includes the following key changes:
- Reduced Sections 1 and 2 to a single sheet. No previous fields were removed. Multiple fields were merged into fewer fields when possible, such as in the employer certification.
- Moved the Section 1 Preparer/Translator Certification area to a separate Supplement A that employers can use when necessary. This supplement provides three areas for current and future preparers and translators to complete as needed.
- Moved Section 3 Reverification and Rehire to a standalone Supplement B that employers can use for rehire or reverification. This supplement provides four areas for current and subsequent reverifications. Employers may attach additional supplements as needed.
- Removed use of “alien authorized to work” in Section 1, replaced it with “noncitizen authorized to work” and clarified the difference between “noncitizen national” and “noncitizen authorized to work.”
- Ensured the form can be filled out on tablets and mobile devices by downloading onto the device and opening in the free Adobe Acrobat Reader app.
- Removed certain features to ensure the form can be downloaded easily. This also removes the requirement to enter N/A in certain fields.
- Improved guidance to the Lists of Acceptable Documents to include some acceptable receipts, as well as guidance and links to information on automatic extensions of employment authorization documentation.
- Added a checkbox for E-Verify employers to indicate when they have remotely examined Form I-9 documents.
The Form I-9 instructions were also updated:
- Reduced length from 15 pages to 8 pages.
- Added definitions of key actors in the Form I-9 process.
- Streamlined the steps each actor takes to complete their section of the form.
- Added instructions for the new checkbox to indicate when Form I-9 documents were remotely examined.
- Removed the abbreviations charts and relocated them to the M-274 (Handbook for Employers).
FTC Amends Safeguards Rule to Add Breach Notification Requirement
Background
The FTC issued the first version of the Safeguards Rule in 2002 pursuant to the Gramm-Leach-Bliley Act (GLBA). Under GLBA, various federal agencies including the FTC, the U.S. Securities and Exchange Commission, the federal banking regulators (the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Federal Reserve Board), and the National Credit Union Administration are required to issue standards for the security of customer information for financial institutions subject to each agency’s jurisdiction.[1] The first version of the Safeguards Rule imposed relatively high-level requirements on covered institutions to implement a written information security program, including designating a qualified individual to lead the program, identifying information security risks, implementing and testing safeguards in response to those risks, overseeing service providers, and periodically adjusting the program based on changes to the business and other circumstances. In December 2021, the FTC overhauled the Safeguards Rule by expanding the existing requirements and enumerating new, more detailed ones. Under the current Safeguards Rule, which we discussed in a prior blog post and webinar, institutions must adopt various safeguards, including encrypting customer information in transit and at rest, multifactor authentication, secure software development and assessment measures, and annual written reports to the board of directors (or other governing body) regarding the institution’s information security program and material security risks, among others. The FTC’s overhauled Safeguards Rule did not include any breach notification requirement.
However, on the same day the FTC published the new Safeguards Rule, December 9, 2021, it also issued a Supplemental Notice of Proposed Rulemaking (SNPRM) to amend the Safeguards Rule to add breach notification.[2] The FTC issued the Notification Requirement in a final rule published on October 27, 2023 (the “Final Rule”). The FTC published the Final Rule shortly after the release by the Consumer Financial Protection Bureau (CFPB) of its proposed “Personal Financial Data Rights” rule under Section 1033 of the Consumer Financial Protection Act of 2010. The CFPB’s proposed rule would require data providers and third parties not otherwise subject to GLBA to comply with the FTC’s Safeguards Rule (we discuss the CFPB’s proposal here), now including the Notification Requirement.
Covered Information
The Notification Requirement dramatically expands covered financial institutions’ breach reporting obligations because of the range of data covered. The Notification Requirement applies to “customer information,” which is broadly defined in the Safeguards Rule as records containing “nonpublic personal information about a customer of a financial institution.” Nonpublic personal information is (i) personally identifiable financial information[3] and (ii) “[a]ny list, description, or other grouping of consumers (and publicly available information pertaining to them) that is derived using any personally identifiable financial information that is not publicly available.” Customer information may include a broad array of data, from more sensitive types of data such as Social Security numbers, detailed financial and purchase histories, and account access information, to relatively routine and benign data, such as basic customer demographics and contact details. Under state data breach reporting laws, companies are required to report breaches of only enumerated categories of data, such as Social Security numbers and other government-issued ID numbers, financial account numbers in combination with access credentials, usernames and passwords, and medical information. 
But given the broad definition of customer information under the Safeguards Rule, covered financial institutions will have to assess their breach reporting obligations for a much larger set of data than they typically do now.[4] At the same time, it is important to note that the Safeguards Rule, and therefore the Notification Requirement, does not apply to information about “consumers” who are not “customers.” Under the Safeguards Rule, a “consumer” is any individual who obtains a financial product or service from a financial institution to be used for a personal, family, or household purpose. A “customer” is a type of consumer: specifically, a consumer with which the financial institution has a “customer relationship,” defined as a “continuing relationship” between the institution and customer under which the institution provides a financial product or service. No customer relationship may exist, for example, where a consumer engages in only “isolated transactions” with the institution, such as by purchasing a money order or making a wire transfer. The Notification Requirement applies only to customer information, and therefore is not triggered by a breach affecting only consumers who are not customers.
Covered Incidents
A “notification event” is defined as “acquisition of unencrypted customer information without the authorization of the individual to which the information pertains” (emphasis added). This definition raises several points for consideration:
- Acquisition: The Notification Requirement is triggered by unauthorized “acquisition” and includes a rebuttable presumption that unauthorized “access” is unauthorized acquisition unless the institution has “reliable evidence” showing that acquisition could not reasonably have occurred. On the surface, the Notification Requirement takes a sort of middle approach vis-à-vis state data breach notification laws: under most state laws, personal data must be acquired to trigger notification obligations, but a small and growing number of states require notification where personal data has only been accessed.[5] However, it is important to note that the FTC has a very broad view of those terms. The FTC describes “acquisition” as “the actual viewing or reading of the data,” even if the data is not copied or downloaded, and “access” as merely “the opportunity to view the data”[6] (emphasis added). Based on the FTC’s reading of those terms, the rebuttable presumption may only be available if an institution has reliable evidence that unauthorized actors did not actually view customer information, even if they had the opportunity to do so.
- Unencrypted: The Notification Requirement treats encrypted data much like state data breach notification laws do. Institutions need not report acquisitions of encrypted data; however, encrypted data is considered unencrypted for the purposes of the Notification Requirement if the encryption key was accessed by an unauthorized person.
- Without Authorization of the Individual to Which the Information Pertains: Typically, when breach notification laws refer to acquisition of data being unauthorized, it is understood that they are referring to whether the acquisition was authorized by the entity that owns the data, not whether it was authorized by the individual who is the subject of the data. By specifying that a notification event occurs when acquisition was unauthorized by the individual data subject, the Notification Requirement potentially encompasses a broader range of incidents than state data breach notification laws. If, for example, a financial institution’s employee uses customer information for a purpose that is authorized by the institution but inconsistent with the institution’s privacy statement or customer agreement, one could argue that the use is acquisition not authorized by the consumer. Whether the FTC would take that novel position remains to be seen. Notably, the FTC’s Health Breach Notification Rule (HBNR) includes similar language in its definition of “breach of security,”[7] and the FTC has taken the position that the HBNR applies to disclosures authorized by the company holding the data but not by the data subject.
Notification Obligation
Financial institutions must notify the FTC “as soon as possible, and no later than 30 days after discovery” of a notification event involving at least 500 consumers. Although not clear from the text of the amendments, the FTC appears to take the position that the 30-day period begins to run when an institution discovers that a notification event has occurred, and not when it discovers specifically that the notification event affects 500 or more consumers. The FTC dismissed concerns that a financial institution may not know how many consumers were affected, or other key information such as whether information was only accessed without acquisition, at the time it discovers a data breach, stating that it expects financial institutions “will be able to decide quickly whether a notification event has occurred.” Where it is difficult to ascertain how many consumers may have been affected—for example, where a data breach affected unstructured data containing an unknown amount of consumer data—institutions may face significant time pressures to meet the 30-day reporting requirement. The Notification Requirement does not include any “risk of harm” analysis or threshold. Under the SNPRM, financial institutions would have been required to notify the FTC only where “misuse” of customer information had occurred or was “reasonably likely” to occur. The final version of the Notification Requirement removes the misuse language and simply requires notification upon discovery that customer information has been “acquired” without authorization.
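For incident response teams, the reporting trigger described above reduces to a few concrete checks. The following Python sketch is purely illustrative (the function names and inputs are our own, not the FTC’s); it encodes the 500-consumer threshold, the rule that encrypted data counts as unencrypted when the key was compromised, and the 30-day clock running from discovery of the event:

```python
from datetime import date, timedelta

FTC_CONSUMER_THRESHOLD = 500   # notification events involving >= 500 consumers
FTC_REPORTING_WINDOW_DAYS = 30  # "as soon as possible, and no later than 30 days"

def ftc_report_required(consumers_affected: int,
                        data_was_encrypted: bool,
                        encryption_key_compromised: bool) -> bool:
    """Rough first-pass triage of whether an incident is a reportable
    'notification event'. Encrypted data is treated as unencrypted when
    the encryption key was accessed by an unauthorized person."""
    effectively_unencrypted = (not data_was_encrypted) or encryption_key_compromised
    return effectively_unencrypted and consumers_affected >= FTC_CONSUMER_THRESHOLD

def reporting_deadline(discovery_date: date) -> date:
    # The clock appears to run from discovery of the event itself,
    # not from when the institution confirms the 500-consumer threshold.
    return discovery_date + timedelta(days=FTC_REPORTING_WINDOW_DAYS)
```

Note that this sketch deliberately omits legal judgment calls, such as rebutting the presumption that unauthorized access is acquisition; those determinations belong with counsel, not code.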
The Notification Requirement is surprisingly silent on financial institutions’ obligations when data breaches occur at their service providers.[8] A financial institution is considered to have discovered a notification incident “if such event is known to any person, other than the person committing the breach, who is [the institution’s] employee, officer, or other agent.” This language indicates that financial institutions are not considered to have knowledge of a notification event that occurred at a service provider (which would not typically be considered the financial institution’s “agent”) until the service provider makes the institution aware of the event. Although there is no specific requirement that institutions obligate their vendors to notify them of security incidents, the Safeguards Rule does require institutions to oversee their service providers, including by entering into contracts requiring service providers to maintain appropriate security safeguards for customer information. The FTC may take the position that financial institutions must require their service providers to report notification events to them under these broader service provider oversight obligations. Additionally, the FTC might argue that because customer information is defined to include information “that is handled or maintained by or on behalf of” a financial institution, institutions’ responsibility for third-party notification events is assumed.
Report Requirements and Publication
Notifications to the FTC, which must be submitted via electronic form on the FTC website, must include the following information:
- The name and contact information of the reporting financial institution;
- A description of the types of information that were involved in the notification event;
- If it is possible to determine, the date or date range of the notification event;
- The number of consumers affected;
- A general description of the notification event; and
- If applicable, whether any law enforcement official has provided the financial institution with a written determination that notifying the public of the breach would impede a criminal investigation or cause damage to national security, and a means for the Federal Trade Commission to contact the law enforcement official. A law enforcement official may request a delay in publication of the report for up to 30 days. The delay may be extended for an additional 60 days in response to a written request from the law enforcement official. Any further delay is only permitted if the FTC staff “determines that public disclosure of a security event continues to impede a criminal investigation or cause damage to national security.”
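The required report contents above map naturally onto a simple record structure. This hypothetical dataclass (the class and field names are ours, invented for illustration) could serve as an internal checklist to confirm nothing required is omitted before the electronic form is submitted:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FtcBreachReport:
    """Internal checklist mirroring the fields the FTC report must contain."""
    institution_name: str                  # name of the reporting institution
    institution_contact: str               # contact information
    information_types: list[str]           # types of information involved
    event_date_range: Optional[str]        # only if possible to determine
    consumers_affected: int                # number of consumers affected
    event_description: str                 # general description of the event
    law_enforcement_delay_requested: bool = False  # written determination, if any
    law_enforcement_contact: Optional[str] = None  # means for FTC to contact official
```

Keeping a structure like this alongside the incident response plan makes it easier for counsel to review a complete draft report before submission.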
Preparing for Compliance
Financial institutions subject to the Safeguards Rule are advised to consider the following steps to prepare to comply with the Notification Requirement:
- Assess Safeguards Rule Compliance and Address Gaps Now: The FTC issued the Notification Requirement to support its enforcement efforts.[9] The FTC intends to review breach reports and assess whether a breach may have been the result of an institution’s failure to comply with the Safeguards Rule’s technical, administrative, and physical safeguards. Institutions should prepare for this increased scrutiny by assessing and remedying any compliance gaps with the Safeguards Rule. The FTC acknowledges that a breach may occur even if an institution fully complies with the Safeguards Rule, so institutions should be prepared to show the FTC that the notification incident occurred notwithstanding their compliance with the rule.[10]
- Review and Update Incident Response Plans. As discussed above, the Notification Requirement dramatically expands covered financial institutions’ breach reporting obligations beyond the enumerated data categories that trigger state data breach reporting laws. Institutions should update their incident response plans to address these expanded obligations and educate their incident response teams about them. Institutions also should determine who will be responsible for submitting any required report to the FTC. Reports should be reviewed by counsel prior to submission, given that they may form the basis for FTC enforcement or consumer class actions.[11]
- Revise Any Data Maps, Information Classification Schemes, and Similar Documentation. Financial institutions also should review their data maps, data inventories, information classification schemes, and similar data management documentation to ensure that they properly address the many types of records that may be considered “customer information” containing “non-public personal information” subject to the Notification Requirement. Doing so will help financial institutions more quickly assess the impact of a security incident and determine whether it is a “notification event” under the amended Safeguards Rule (for example, by informing them of whether customer information may be present on a compromised system). Quick assessment will be important given the 30-day notification deadline and the fact that the FTC appears not to distinguish between when an institution becomes aware of a notification event and when it determines that the event triggers the reporting obligation.
- Assess and Amend Service Provider Agreements. Although there is no specific requirement in the Safeguards Rule that institutions obligate their service providers to notify them of notification events, the FTC may argue that such an obligation is assumed by the Safeguards Rule provisions. Accordingly, financial institutions should review their relevant service provider agreements and determine whether any amendments are necessary to support their compliance with the Notification Requirement.
STATE, CITY, COUNTY AND MUNICIPAL DEVELOPMENTS
U.S. State Privacy Impact Assessment (PIA/DPIA) Requirements
With the passage of numerous comprehensive state laws, many U.S. companies are now subject to a formal requirement to complete a Privacy Impact Assessment (“PIA”). While the various state and international PIA requirements may seem daunting, it is possible to align an organization’s PIA process to the most nuanced laws and achieve a baseline founded on the consistency across the states. Below are the core concepts that you should be familiar with. See Kilpatrick Townsend’s recent Legal Alert for the answers to some commonly asked questions and practical suggestions for approaching the PIA requirements landscape.
Core Concepts/Key Information At a Glance
- Many states follow a “baseline” model, which provides that PIAs are generally required before processing personal data in a manner that presents a heightened risk of harm to consumers.
- “PIA” is a broad term for privacy evaluations that also covers more targeted assessments, such as GDPR or GDPR-style data protection impact assessments (DPIAs). U.S. state laws often refer to PIAs as data protection assessments. PIAs are a means of documenting details around personal data use cases / processing activities and are essentially risk/benefit analyses.
- Heightened risk of harm generally includes (but is not limited to) activities involving targeted advertising, profiling, sale of personal data, and handling sensitive personal data.
- Colorado has documented a set of detailed PIA requirements via regulation, and California is expected to finalize a set of detailed requirements for privacy risk assessments very soon.
- For U.S.-based companies, model the overall PIA process on the “baseline states.” Focus on the common factors triggering PIAs. Layer on California- and Colorado-specific requirements where applicable. If the company plans to expand globally, be sure to include questions about the jurisdictions in which it will be operating.
- Identify additional likely candidates for “high-risk” / “heightened risk” processing based on what the organization does (e.g., the company’s business model, data handling, industry, etc.).
- If the company also has GDPR or other global exposure and an established GDPR PIA/DPIA template in place, build in screening questions to see if additional assessments/questions are needed for the U.S. states.
- Include or be prepared to include questions related to AI / ADMT.
- Continue to monitor for developments in the U.S. state privacy arena, as well as municipal-level or topic-specific requirements.
Massachusetts Pay Transparency Legislation
- Pay Transparency in Job Postings: Employers with 25+ employees in Massachusetts must post the pay range in internal and external job postings (including recruitment for new hires via a third party). Pay range is defined as the annual salary or hourly wage range the employer “reasonably” and “in good faith” expects to pay for the position. Crafting a position’s reasonable pay range is just one of many potential issues a covered employer must consider when complying with pay transparency legislation, which we discuss in more detail here. Notably, neither the House Bill nor the Senate Bill requires employers to disclose other compensation, including bonuses, commissions or other employee benefits for advertised positions.
- Annual Wage Data Reporting: Employers with 100+ full-time employees in Massachusetts at any time during the preceding year, and who are subject to the federal filing requirements of wage data reports (EEO-1, EEO-3, EEO-4 or EEO-5), must submit a copy of their federal filings to the state secretary. This data, which reflects workforce demographics and salaries, will then be submitted to the executive office of labor and workforce development.
Ohio: Voters approved a ballot initiative that:
- Allows adults who are at least 21 years old to use and possess marijuana, including up to 2.5 ounces of marijuana;
- Allows the sale and purchase of marijuana, which a new Division of Cannabis Control would regulate; and
- Enacts a 10 percent tax on marijuana sales.
New York: Key takeaways from the Clean Slate Act include:
- The Clean Slate Act calls for eligible misdemeanor convictions to be sealed after three years from an individual’s satisfaction of a sentence and eligible felony convictions to be sealed after eight years from an individual’s satisfaction of a sentence.
- The New York State Human Rights Law has been amended to prohibit discrimination based on a sealed conviction, subject to limited exceptions.
- The law will likely have an impact on employer background checks and hiring practices.
A claim based on disclosure of a sealed conviction requires a showing that:
- there was a duty of care owed to the individual with the sealed conviction;
- the person knowingly and willfully breached such duty;
- the disclosure caused injury to the individual; and
- the “breach of that duty was a substantial factor in the events that caused the injury suffered by such person.”
COURT CASES
Boston settles drug testing discrimination suit for $2.6M
Boston has settled a decades-old lawsuit over discriminatory hair drug testing for $2.6 million. The test at the heart of the lawsuit was one employed by the Boston Police Department to detect the presence of controlled substances in hair follicles, which the plaintiffs in the nearly 20-year-old lawsuit argued came back with disproportionate numbers of false positives for Black people. Experts in the case testified that the test was unable to reliably distinguish whether drug remnants found in hair were the result of ingestion (which would be the point of the testing) or of exterior contamination. What led to the disproportionate number of Black testees returning false positives, experts argued, was the unique texture of their hair, along with commonly used grooming products, which made external contamination more likely. “This settlement puts an end to a long, ugly chapter in Boston’s history,” said Oren Sellstrom, Litigation Director at Lawyers for Civil Rights, one of the two firms that represented the Black police officer plaintiffs, in an emailed statement. “As a result of this flawed test, our clients’ lives and careers were completely derailed. The City has finally compensated them for this grave injustice.” Lawyers for Civil Rights, which says it has represented the plaintiffs in the case since the beginning, announced the settlement Thursday. The settlement will pay out an equal portion of the money ($650,000) to each of the four plaintiffs. The law firm WilmerHale also represented the plaintiffs on a pro bono basis. The test, which has been administered since at least 1999 according to prior Herald reporting, was administered by Acton-based Psychemedics, which has also been involved in lawsuits with the city. Herald efforts to reach a company representative for comment Thursday were unsuccessful.
Mayor Michelle Wu said “this settlement marks the end to an important process to guarantee that every officer is treated fairly. “Under (Police) Commissioner Michael Cox’s leadership, we are strengthening our entire department by building more trust within the department and with (the) community and supporting a workforce that reflects the communities we serve,” her statement continued. The settlement nearly doubles the amount of money the plaintiffs’ lawyers say the city has shelled out already in fighting the various lawsuits against the controversial test, as they say the city has spent some $2.1 million in legal fees. The settlement was also warmly received by the police unions the Massachusetts Association of Minority Law Enforcement Officers and the Boston Police Patrolmen’s Association. “The hair test not only wreaked havoc on the lives of many Black officers, it also deprived Boston residents of exemplary police officers,” Jeffrey Lopes, MAMLEO president, said. “The City is still trying to make up for the loss of diversity on the police force that resulted from use of the hair test.” Likewise, BPPA President Larry Calderone said his organization “couldn’t be happier with the decision. Thankfully, the award gives closure and vindication to police officers who pushed back and helped pave the way to eliminate a faulty hair follicle drug testing procedure.” Click Here for the Original Article
INTERNATIONAL DEVELOPMENTS
Canada: Salary or wage ranges must be included in publicly advertised jobs in British Columbia
As of 1 November 2023, all employers in British Columbia must specify the expected salary or wage range for all publicly advertised job opportunities. The government of British Columbia recently published a guidance document clarifying this requirement (the Guidelines). In a previous post, we explained that the British Columbia Pay Transparency Act, S.B.C. 2023, c. 18 (PTA) received Royal Assent on 11 May 2023. The PTA was introduced to help address inequalities and close the gender pay gap between men and women in British Columbia. While the Guidelines do not have the same legal force as the PTA itself or any regulations that are published under the PTA in the future, the Guidelines are nevertheless a helpful clarification and insight into what will be expected of employers with respect to publishing salary or wage information. The key takeaways for employers from the Guidelines are:
- Employers are only required to include an employee’s expected base salary or wage in a job posting. Employers can voluntarily include additional details beyond the base salary or wage, such as bonuses, benefits, commission, tips or overtime pay.
- The salary or wage range must have a specified minimum and maximum. For example, an employer would not be compliant with the PTA if the job advertisement described the salary or wage range as ‘up to CAD20 per hour’ or ‘CAD20 per hour and up’. The examples provided of acceptable ranges include ‘CAD20-CAD30 per hour’ or ‘CAD40,000 - CAD60,000 per year’. Currently, there are no guidelines as to how large the expected salary or wage range can be in a job advertisement, although limits may be set out by regulations in the future.
- Employers and applicants are not restricted by the expected salary or wage range advertised. Applicants can request a higher salary or wage than advertised. Similarly, employers can agree to pay a higher salary or wage than what was publicly advertised.
- The requirement to publish salary or wage information under the PTA applies to jobs advertised in jurisdictions outside of British Columbia as well, so long as the job in question is open to British Columbia residents and can be filled by someone living in British Columbia, either in person or remotely.
- The requirement to publish salary or wage information applies to jobs posted by third parties on job search websites, job boards and other recruitment platforms on behalf of the employer.
- The requirement to publish salary or wage information does not apply to general ‘help wanted‘ posters and recruitment campaigns that do not mention specific job opportunities; or job postings that are not posted publicly.
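The range rules above lend themselves to a simple automated check. Below is a minimal, hypothetical Python sketch (the function name and phrase list are assumptions of ours, not anything published by the B.C. government) that flags the open-ended phrasings the Guidelines call out:

```python
# Illustrative sketch only (function name and phrase list are assumptions,
# not anything published by the B.C. government): checks a posting's pay
# text against the Guideline that a range must state both a specified
# minimum and a maximum.
import re

def is_compliant_range(pay_text: str) -> bool:
    """Return True if the pay text states both a minimum and a maximum."""
    text = pay_text.lower()
    # Open-ended phrasings such as "up to CAD20" or "CAD20 and up" fail.
    if "up to" in text or "and up" in text:
        return False
    # A compliant range joins two figures, e.g. "CAD20-CAD30 per hour".
    figures = re.findall(r"cad\s?[\d,]+(?:\.\d+)?", text)
    return len(figures) >= 2

print(is_compliant_range("CAD20-CAD30 per hour"))            # True
print(is_compliant_range("up to CAD20 per hour"))            # False
print(is_compliant_range("CAD40,000 - CAD60,000 per year"))  # True
```

A real screening step would of course need to handle the many ways postings phrase compensation; the point is only that the minimum-and-maximum rule is mechanically checkable.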
MISCELLANEOUS DEVELOPMENTS
Automation & Employment Discrimination Employers are increasingly using some form of Artificial Intelligence (“AI”) in employment processes and decisions. Per the Equal Employment Opportunity Commission (“EEOC”), examples include: [R]esume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test. EEOC-NVTA-2023-2, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, Equal Emp’t Opportunity Comm’n, (issued May 18, 2023) (last visited November 17, 2023). As the agency tasked with enforcing, and promulgating regulations regarding, federal antidiscrimination laws, the EEOC is concerned about how AI could result in discrimination in the workplace. While AI can be present throughout the work lifecycle, from job posting through termination, the EEOC has devoted particular attention to applicant and application sorting and recommendation procedures. If not executed and monitored correctly, the use of AI in these processes could result in discriminatory impact (a.k.a. disparate impact) on certain protected classes. These claims arise when employers use facially neutral tools or policies, but their application results in differential treatment of, or impact on, a particular protected class.
For example, if an application sorting system automatically discards the applications of individuals who have one or more gaps in employment, the result could be that women (due to pregnancy and childbirth-related constraints) and applicants with disabilities are rejected at a higher rate than men and “able-bodied” applicants. In this circumstance, the employer “doesn’t know what it doesn’t know” and would likely be unaware that some women and applicants with disabilities were pre-sorted before review. While the employer may not have intended this outcome, it could nevertheless be found to have violated Title VII of the Civil Rights Act (“Title VII”) and the Americans with Disabilities Act (“ADA”). Ironically, many employment AI tools are marketed as bias-eliminating because some can operate through data de-identification—a process by which protected class information is removed from application information. For example, as a general matter, applicants with “ethnic sounding” names are less likely to receive callbacks than those with Anglo-sounding names, like the John Smiths of the world. By replacing applicant names with numbers, implicit bias is less likely to creep in. Although data de-identification is one tool for avoiding bias in employment decisions, it is not a cure-all and can sometimes backfire. For example, data de-identification could result in an employer being ignorant of the disparate impact caused by its policies. Take the name example. Names often are not the only indicators of race or culture. Suppose the HR professional reviewing applications is not well versed in Historically Black Colleges and Universities (“HBCUs”), and when the professional does not recognize the name of an HBCU on “John Smith’s” application, she moves it to the bottom of the pile. Of course, there are other more subtle race/ethnicity data points that could trigger subconscious bias (e.g., residence, prior job experience, etc.).
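The name-replacement step described above can be sketched in a few lines of code. This is a hypothetical illustration (the field names and records are invented, not from any real tool), and it also shows why de-identification is not a cure-all: the fields that remain can still act as proxies for protected class.

```python
# Illustrative sketch of the name de-identification step described above:
# applicant names are swapped for opaque numeric IDs before human review.
# Field names and records are hypothetical. Note the remaining fields
# (school, residence, etc.) can still act as proxies for protected class.

def deidentify(applications):
    """Replace each applicant's name with a numeric ID; keep a private key."""
    key = {}       # applicant_id -> real name, withheld from reviewers
    blinded = []
    for i, app in enumerate(applications, start=1):
        key[i] = app["name"]
        record = {k: v for k, v in app.items() if k != "name"}
        record["applicant_id"] = i
        blinded.append(record)
    return blinded, key

apps = [
    {"name": "John Smith", "school": "Howard University", "years_exp": 5},
    {"name": "Jane Doe", "school": "State College", "years_exp": 4},
]
blinded, key = deidentify(apps)
print(blinded[0])  # {'school': 'Howard University', 'years_exp': 5, 'applicant_id': 1}
```

Even with the name removed, the school field here still signals an HBCU, which is exactly the residual-proxy problem the article describes.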
While there is not one solution to this complex problem, auditing application systems can at least put the employer on notice that something may need to be changed. When developing an AI utilization strategy, employers must be mindful of the complexity of both the systems and the law. Click Here for the Original Article
FTC Authorizes Use of Civil Investigative Demands (CIDs) for AI-related Products and Services On November 21, 2023, the Federal Trade Commission announced that it has approved an omnibus resolution authorizing the use of compulsory process in non-public investigations involving products and services that use or claim to be produced using artificial intelligence (AI) or claim to detect its use. The omnibus resolution will streamline the FTC staff’s ability to issue civil investigative demands (CIDs), which are a form of compulsory process similar to a subpoena, in investigations relating to AI, while retaining the agency’s authority to determine when CIDs are issued. The FTC issues CIDs to obtain documents, information and testimony that advance FTC consumer protection and competition investigations. The omnibus resolution will be in effect for 10 years. AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI. Although AI, including generative AI, offers many beneficial uses, it can also be used to engage in fraud, deception, infringements on privacy and other unfair practices, which may violate the FTC Act and other laws.
At the same time, AI can raise competition issues in a variety of ways, including if one or just a few companies control the essential inputs or technologies that underpin AI. Click Here for the Original Article
Discrimination and bias in AI recruitment: a case study Barely a day goes by without the media reporting the potential benefits of or threats from AI. One of the common concerns is the propensity of AI systems to return biased or discriminatory outcomes. By working through a case study about the use of AI in recruitment, we examine the risks of unlawful discrimination and how that might be challenged in a UK employment tribunal.
Introduction
Our case study begins with candidates submitting job applications which are to be reviewed and ‘profiled’ by an AI system (the automated processing of personal data to analyse or evaluate people, including to predict their performance at work). We follow this through to the disposal of resulting employment tribunal claims from the unsuccessful candidates, and examine the risks of unlawful discrimination in using these systems. What emerges are the practical and procedural challenges for claimants and respondents (defendants) arising from litigation procedures that are ill-equipped for an automated world.
Bias and discrimination
Before looking at the facts, we consider the concepts of bias and discrimination in automated decision-making. The Discussion Paper published for the AI Safety Summit organised by the UK government and held at Bletchley Park on 1 and 2 November 2023 highlighted the risks of bias and discrimination and commented: Frontier AI models can contain and magnify biases ingrained in the data they are trained on, reflecting societal and historical inequalities and stereotypes. These biases, often subtle and deeply embedded, compromise the equitable and ethical use of AI systems, making it difficult for AI to improve fairness in decisions. Removing attributes like race and gender from training data has generally proven ineffective as a remedy for algorithmic bias, as models can infer these attributes from other information such as names, locations, and other seemingly unrelated factors. What is bias and what is discrimination? Much attention has been paid to the potential for bias and discrimination in automated decision-making. Bias and discrimination are not synonymous but often overlap. Not all bias amounts to discrimination and not all discrimination reflects bias. A solution can be biased if it leads to inaccurate or unfair outcomes. A solution can be discriminatory if it disadvantages certain groups. A solution is unlawfully discriminatory if it disadvantages protected groups in breach of equality law. How can bias and discrimination taint automated decision-making? Bias can creep into an AI selection tool in a number of ways. For example, there can be: historical bias; sampling bias; measurement bias; evaluation bias; aggregation bias; and deployment bias. To give a recent example, the shortlist of six titles for the 2023 Booker Prize included three titles by authors with the first name ‘Paul’. An AI programme asked to predict works to be shortlisted for this prize is likely to identify being called ‘Paul’ as a key factor.
Of course, being called Paul will not have contributed to their shortlisting, and identification of this as a determining factor amounts to bias. In doing so, the AI tool would be identifying a correlating factor which had not actually been a factor in the shortlisting; the tool’s prediction would therefore be biased as it would be inaccurate and unfair. In this case, this bias is also potentially discriminatory, as Paul is generally a male name, and possibly also discriminatory on grounds of ethnicity and religion. An algorithm can be tainted by historical bias or discrimination. AI algorithms are trained using past data. A recruitment algorithm takes data from past candidates and there will always be a risk of under-representation of particular groups in that training data. Bias and discrimination are even more likely to arise from the definition of success which the algorithm seeks to replicate based on successful recruitment in the past. There is an obvious risk of past discrimination being embedded in any algorithm. This process presents the risk of random correlations being identified by an AI algorithm, and there are several reported examples of this happening. One example from several years ago is an algorithm which identified being called Jared as one of the strongest correlates of success in a job. Correlation is not always causation. An outcome may potentially be discriminatory but not be unfair or inaccurate and so not biased. If, say, a recruitment application concluded that a factor in selecting the best candidates was having at least ten years’ relevant experience, this would disadvantage younger candidates, and a younger candidate may be excluded even if, in all other respects, they would be a strong candidate. This would be unlawful if it could not be justified on the facts. It would not, however, necessarily be a biased outcome. There has been much academic debate on the effectiveness of AI in eliminating the sub-conscious bias of human subjectivity.
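The ‘Paul’ and ‘Jared’ correlations can be made concrete with a toy sketch. The data below is wholly invented for illustration; it simply shows how a naive learner that scores features by historical success rate will rank a coincidental correlate at the top.

```python
# Toy illustration with made-up data of how a naive learner latches onto a
# spurious correlate: every past hire named Paul happened to be
# shortlisted, so "being called Paul" earns a perfect score even though
# the name played no causal role in any decision.
from collections import Counter

past_outcomes = [          # (first_name, was_shortlisted)
    ("Paul", True), ("Paul", True), ("Paul", True),
    ("Mary", True), ("Mary", False),
    ("Omar", True), ("Omar", False),
    ("Lucy", False), ("Lucy", False),
]

hits, totals = Counter(), Counter()
for name, shortlisted in past_outcomes:
    totals[name] += 1
    hits[name] += shortlisted

# Crude per-name "success rate" a naive tool might treat as a feature weight.
weights = {name: hits[name] / totals[name] for name in totals}
top = max(weights, key=weights.get)
print(top, weights[top])  # Paul scores 1.0: correlation mistaken for signal
```

Real systems use far richer features, but the failure mode is the same: any attribute that correlates with past outcomes, causally or not, can be learned as if it were predictive.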
Supporters argue that any conscious or sub-conscious bias is much reduced by AI. Critics argue that AI merely embeds and exaggerates historic bias.
The law
Currently in the UK there are no AI-specific laws regulating the use of AI in employment. The key relevant provisions at present are equality laws and data privacy laws. This case study focuses on discrimination claims under the Equality Act 2010.
The case study
Acquiring the shortlisting tool
Money Bank gets many hundreds of applicants every year for its annual recruitment of 20 financial analysts to be based in its offices in the City of London. Shortlisting takes time and costly HR resources. Further, Money Bank is not satisfied with the suitability of the candidates shortlisted each year. Money Bank, therefore, acquires an AI shortlisting tool, GetBestTalent, from a leading provider, CaliforniaAI, to incorporate into its shortlisting process. CaliforniaAI is based in Silicon Valley in California and has no business presence in the UK. Money Bank is attracted by CaliforniaAI’s promises that GetBestTalent will identify better candidates, more quickly and more cheaply than by relying on human decision-makers. Money Bank is also reassured that CaliforniaAI’s publicity material states that GetBestTalent has been audited to ensure that it is bias- and discrimination-free. Money Bank was sued recently by an unsuccessful job applicant claiming that they were unlawfully discriminated against when rejected for a post. This case was settled but proved costly and time-consuming to defend. Money Bank wants, at all costs, to avoid further claims.
Data protection impact assessment
Money Bank’s Data Protection Officer (DPO) conducts a data protection impact assessment (DPIA) into the proposed use by Money Bank of GetBestTalent given the presence of various high-risk indicators, including the innovative nature of the technology and profiling. Proposed mitigations following this assessment include bolstering transparency around the use of automation by explaining clearly that it will form part of the shortlisting process; ensuring that an HR professional will review all successful applications; and confirming with CaliforniaAI that the system is audited for bias and discrimination. On that basis, the DPO considers that the shortlisting decisions are not ‘solely automated’ and is satisfied that Money Bank’s proposed use of the system complies with UK data protection laws (this case study does not consider the extent to which the DPO is correct in considering Money Bank’s GDPR obligations to have been satisfied in this case). Money Bank enters into a data processing agreement with CaliforniaAI that complies with UK GDPR requirements. Money Bank also notes that CaliforniaAI is self-certified as compliant with the UK extension to the EU-US Data Privacy Framework.
AI and recruitment
GetBestTalent is an off-the-shelf product and CaliforniaAI’s best seller. It has been developed for markets globally and used for many years, though it is updated periodically by the developers. The use of algorithms, and the use of AI in HR systems specifically, is not new but has been growing rapidly in recent years. AI is being used at different stages of the recruitment process, but one of the most common applications by HR is to shortlist vast numbers of candidates down to a manageable number. AI shortlisting tools can be bespoke (developed specifically for the client); off-the-shelf; or based on an off-the-shelf system but adapted for the client. The GetBestTalent algorithm is based on ‘supervised learning’, where the input data and desired output are known and the machine learning method identifies the best way of achieving the output from the inputted data. This application is ‘static’ in that it only changes when CaliforniaAI’s developers make changes to the algorithm. Other systems, known as dynamic systems, can be more sophisticated and continuously learn how to make the algorithm more effective at achieving its purpose.
Sifting applicants
This year 800 candidates apply for the 20 financial analyst positions at Money Bank. Candidates are all advised that Money Bank will be using automated profiling as part of the recruitment process. Alice, Frank and James are unsuccessful; all considered themselves strong candidates with the qualifications and experience advertised for the role. Alice is female, Frank is black, and James is 61 years old. Each is perplexed at their rejection and concerned that it was unlawfully discriminatory. All three are suspicious of automated decision-making and have read or heard concerns about these systems.
Discrimination claims in the employment tribunal
Alice, Frank and James each contact Money Bank challenging their rejection. Money Bank asks one of its HR professionals, Nadine, to look at each of the applications. There is little obvious to differentiate these applications from the shortlisted candidates – and Nadine cannot see that the shortlisted candidates are obviously stronger – so she confirms the results of the shortlisting process. The Bank responds to Alice, Frank and James saying that it has reviewed the rejections and that it uses a reputable AI system which, it has been reassured, does not discriminate unlawfully, but that it has no further information, as the criteria used are developed by the algorithm and are not visible to Money Bank. The data processing agreement between Money Bank and CaliforniaAI requires CaliforniaAI (as processor) to assist Money Bank to fulfil its obligation (as controller) to respond to rights requests, but does not specifically require CaliforniaAI to provide detailed information on the logic behind the profiling nor its application to individual candidates. Alice, Frank and James all start employment tribunal proceedings in the UK claiming, respectively, sex, race and age discrimination in breach of the UK’s Equality Act. They:
- claim direct and indirect discrimination against Money Bank; and
- sue CaliforniaAI for inducing and/or causing Money Bank to discriminate against them.
Disclosure
Alice, Frank and James recognise that, despite their suspicions, they will need more evidence to back up their claims. They, therefore, contact Money Bank and CaliforniaAI asking for disclosure of documents with the data and information relevant to their rejections. They also write to Money Bank and CaliforniaAI with data subject access requests (DSARs) making similar requests for data. These requests are made under their rights under UK data protection law, over which the employment tribunal has no jurisdiction, and are therefore independent of their employment tribunal claims.
Disparate impact data
To seek to establish discrimination, each candidate requests data:
- Alice asks Money Bank for documents showing the data on both the total proportion of candidates, and the proportion of successful candidates, who were women. This is needed to establish her claim of indirect sex discrimination.
- Frank asks for the same in respect of the Black, Black British, Caribbean or African ethnic group.
- James asks for the data for both over 60-year-olds and over 50-year-olds.
Each claimant also requests:
- a copy of the algorithm used in the shortlisting programme;
- the logic and factors used by the algorithm in achieving its output (i.e. explainability information relating to their individual decisions); and
- the results of the discrimination audit.
What did the data show?
The data provided by Money Bank shows that of the 800 job applicants: 320 were women (40%) and 480 were men (60%); 80 described their ethnicity as Black, Black British, Caribbean or African (10%); and James was the only applicant over the age of 50. Of the 320 women, only four were successful (20% of total shortlisted) whereas 16 men were shortlisted (80% of shortlisted). Of the 80 applicants from Frank’s ethnic group, three were appointed (15% of successful applicants). Therefore, the data shows that the system had a disparate impact against women but not against Black, Black British, Caribbean or African candidates. There was no data to help James with an indirect discrimination claim.
| | Number of candidates (total 800) | % of candidates | Number of successful candidates (total 20) | % of successful candidates |
| Gender – female candidates | 320 | 40% | 4 | 20% |
| Ethnicity – Black, Black British, Caribbean or African candidates | 80 | 10% | 3 | 15% |
| Age – candidates over 50 years old | 1 | <1% | 0 | 0% |
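One way to quantify the disparity in the table above is the four-fifths (80%) rule of thumb the US EEOC uses as a rough screen for adverse impact. A UK tribunal is not bound by that rule, so this is purely illustrative, but applied to the figures it makes the gap concrete:

```python
# Checking the table above against the four-fifths (80%) rule of thumb the
# US EEOC uses as a rough screen for adverse impact: a group selected at
# under 80% of the most-favoured group's rate suggests disparate impact.
# (A UK tribunal is not bound by this rule; it is purely illustrative.)

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

def four_fifths_ratio(group_rate: float, comparator_rate: float) -> float:
    """Ratio of the group's selection rate to the comparator group's."""
    return group_rate / comparator_rate

women = selection_rate(4, 320)    # 0.0125  (1.25%)
men = selection_rate(16, 480)     # ~0.0333 (3.33%)
print(f"women/men ratio: {four_fifths_ratio(women, men):.2f}")  # 0.38, far below 0.8

black = selection_rate(3, 80)     # 0.0375
others = selection_rate(17, 720)  # ~0.0236
print(four_fifths_ratio(black, others) < 0.8)  # False: no adverse impact indicated
```

Women were shortlisted at well under half the men's rate, which matches the article's conclusion of disparate impact against women but not against Frank's ethnic group.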
Establishing indirect discrimination
Alice needs to establish:- a provision, criterion or practice (PCP);
- that the PCP has a disparate impact on women;
- that she is disadvantaged by the application of the PCP; and
- that the PCP is not objectively justifiable.
Establishing direct discrimination
Alice is also pursuing a direct sex discrimination claim and Frank and James, not deterred by the failure to get their indirect discrimination claims off the ground, have also continued their direct race and age discrimination claims respectively. The advantage for Alice in pursuing a direct discrimination claim is that this discrimination (unlike indirect discrimination) cannot be justified, and the fact of direct discrimination is enough to win her case. Each applicant has to show that they were treated less favourably (i.e. not shortlisted) because of their protected characteristic (sex, race, age respectively). To do this, the reason for the decision not to shortlist must be established. They have no evidence of the reason, but this does not necessarily defeat their claims. Under UK equality law, the burden of proof can, in some circumstances, transfer so that it is for the employer to prove that it did not discriminate. To prove this, the employer would then have to establish the reason and show that it was not the protected characteristic of the claimant in question. In this case, this would be very difficult for Money Bank as it does not know why the candidates were not shortlisted. What is required for the burden of proof to transfer? The burden of proof will transfer if there are facts from which the court could decide that discrimination occurred. This is generally paraphrased as the drawing of inferences of discrimination from the facts. If inferences can be drawn, the employer will need to show that there was not discrimination.
Prospects of success
Looking at each claimant in turn:
- Frank will struggle to draw inferences, as there is no disparate impact from which any less favourable treatment may be inferred. The absence of any disparate impact does not mean that Frank could not have been directly discriminated against, but without more his claim is unlikely to get anywhere. He has neither an explanation of the basis for the decision nor data on the ethnic breakdown of Money Bank’s current workforce. He has limited information about Money Bank’s approach to equality. He cannot prove facts which, in the absence of an explanation, show prima facie discrimination, so his claim fails.
- James’s claim is unlikely to be rejected as quickly as Frank’s, as the data doesn’t help prove or disprove his claim. James could try to rely on the absence of older workers in the workforce, any lack of training or monitoring, and past claims (if he had this information), as well as the absence of an explanation for his rejection, but, in reality, this claim looks pretty hopeless.
- Alice may be on stronger ground. She can point to the disparate impact data as a ground for inferences, but this will not normally be enough on its own to shift the burden of proof. Alice can also point to the opaque decision-making. Money Bank could rebut this if the decision was sufficiently ‘explainable’ that the reason for Alice’s rejection could be identified. However, it cannot do so here. The dangers of inexplicable decisions are obvious.
Conclusion
The case of Alice, Frank and James highlights the real challenges for claimants winning discrimination claims where AI solutions have been used in employment decision-making. The case also illustrates the risks and pitfalls for employers using such solutions, and how both existing data protection and equality laws are ill-suited to regulating automated employment decisions. Looking forward, as the UK and other countries debate the appropriate level of regulation of AI in areas such as employment, it is to be hoped that any regulations recognise and embrace the inevitability of increased automation while, at the same time, ensuring that individuals’ rights are protected effectively. Click Here for the Original Article
Equal Pay Named in EEOC Targeted Priorities The Equal Employment Opportunity Commission (EEOC) has taken another step toward achieving its goal of equal pay and eliminating discrimination. EEOC objectives for fiscal years 2024 through 2028 are highlighted in its Strategic Enforcement Plan (SEP), released on September 21. And its message is uncompromising.
EEOC priorities
The Commission’s clear focus is on combatting employment discrimination, promoting inclusive workplaces, and responding to racial and economic justice. To achieve this, it names six targeted subject matter priorities:
- Eliminating barriers in recruitment and hiring.
- Protecting vulnerable workers from underserved communities from discrimination.
- Addressing selected emerging and developing issues.
- Advancing equal pay for all workers.
- Preserving access to the legal system.
- Preventing and remedying systemic harassment.
EEOC pushes for equal pay and workplace justice
Publication of the EEOC’s priorities comes just weeks after the agency announced its alliance with the Department of Labor Wage and Hour Division. Aimed at enforcing “workplace justice issues”, the alliance involves greater collaboration on employment-related matters and regulatory enforcement. As part of the joint Memorandum of Understanding, target areas for investigation and enforcement may include:
- Employment discrimination based on race, color, religion, sex, national origin, age, disability, or genetic information.
- Unlawful compensation practices, such as violations of minimum wage, overtime pay or wage discrimination laws.
Introducing New York wage theft laws
Equal pay is also a targeted priority for New York. In an unprecedented step, New York Governor Kathy Hochul signed legislation which amends the state’s Penal Law and makes wage theft a form of grand larceny. Under state law, grand larceny is defined as any theft valued at $1,000 or more. The degrees of grand larceny relate to the value of property taken. Penalties increase from fourth degree larceny to first degree. New York Penal Law’s larceny statute describes wage theft as when a person is hired: “…to perform services and the person performs such services and the person [employer] does not pay wages, at the minimum wage rate and overtime . . . to said person for work performed.” In a prosecution: “…it is permissible to aggregate all nonpayments or underpayments ….into one larceny count, even if the nonpayments or underpayments occurred in multiple counties.” Employers who engage in wage theft will now be subject to criminal prosecution. The legislation came into immediate effect on September 6, 2023.
The high cost of wage theft
In justification, the New York Senate notes that wage theft accounts for almost $1 billion in lost wages every year. That’s according to Cornell University’s Worker Institute. Wage theft in New York is pervasive across the state’s construction industry, which is expected to be a focal point for the new Act. Wage theft is also a significant problem across the US. The Economic Policy Institute reports over $3 billion was recovered for workers between 2017 and 2020. That’s just the tip of a wage theft iceberg that it estimated at $50 billion annually back in 2014. New York’s wage theft laws are its latest attempt to crack down on wage and hour law violations. Further, they come just over six months after Manhattan’s District Attorney, Alvin Bragg, Jr., announced the creation of a “Worker Protection Unit” to investigate and prosecute wage theft and other forms of worker exploitation and harassment. Earlier this year, New York City’s “NYC Bias Audit Law,” also known as Local Law 144, came into force. It requires employers to carry out “bias audits” of all automated employment decision tools (AEDTs) used in the hiring process and internal promotions.
EEOC increases pressure on employers to ensure equal pay
It is highly likely that the EEOC will support its goal of equal pay by reinstating EEO-1 Component 2 pay data reporting, especially as pay equity is prioritized in its Equity Action Plan. Like the SEP, the Equity Action Plan aims to tackle systemic discrimination, advance equity, and better serve members of underserved communities. The EEOC’s message – and that of New York state – is clear. It’s time for employers to review their pay practices.
Ensure compliance with EEOC priorities on equal pay
The EEOC states “pay inequity is not solely an issue of sex discrimination, but an intersectional issue that cuts across race, color, national origin, and other protected classes.” Intersectionality is key to achieving pay equity. It recognizes that individuals can experience discrimination based on the intersection of multiple identities. Our state-of-the-art pay equity software, PayParity, helps to solve HR’s most complex challenges around people, data, and compliance. It analyzes compensation through the intersection of gender, race/ethnicity, disability, age and more in a single statistical regression analysis. Working with a trusted pay equity software provider also ensures compliance with EEOC Title VII guidance and complex pay transparency legislation. Click Here for the Original Article
Let's start a conversation
At ClearStar, we are committed to your success. An important part of your employment screening program involves compliance with various laws and regulations, which is why we are providing information regarding screening requirements in certain countries, regions, etc. While we are happy to provide you with this information, it is your responsibility to comply with applicable laws and to understand how such information pertains to your employment screening program. The foregoing information is not offered as legal advice but is instead offered for informational purposes. ClearStar is not a law firm and does not offer legal advice, and this communication does not form an attorney-client relationship. The foregoing information is therefore not intended as a substitute for the legal advice of a lawyer knowledgeable of the user’s individual circumstances or to provide legal advice. ClearStar makes no assurances regarding the accuracy, completeness, or utility of the information contained in this publication. Legislative, regulatory and case law developments regularly impact this area, which is evolving rapidly. ClearStar expressly disclaims any warranties or responsibility for damages associated with or arising out of the information provided herein.