February 2023 Screening Compliance Update



ClearStar is happy to share the industry-related articles below, written by subject matter experts and published on the internet, to assist you in establishing and maintaining a compliant background screening program.


EEOC Hears Testimony Concerning Employment Discrimination in Artificial Intelligence and Automated Systems

On January 31, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) held a public hearing, titled, “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” to receive panelist testimony concerning the use of automated systems, including artificial intelligence, by employers in employment decisions. The hearing convened with statements by the four EEOC commissioners—Chair Charlotte A. Burrows, Vice Chair Jocelyn Samuels, Commissioner Keith E. Sonderling, and Commissioner Andrea R. Lucas—followed by panelist testimony that included prepared statements and question-and-answer periods with each commissioner. The panelists included higher education professors, nonprofit organization representatives, attorneys, and a workforce consultant.

The EEOC has invited the public to submit written comments on any issues discussed at the meeting through February 15, 2023. These comments will be reviewed and considered by the EEOC and its staff, who are working on these matters.

Panelist Concerns

The testimony addressed a number of shared concerns, though the panelists diverged in their recommendations about the role the EEOC should play to address them.

Critical evaluation of data. The testimony delivered to the EEOC consistently cited the importance of data to artificial intelligence. Concerns related to data include how its scope and quality can impact the individuals who may be selected or excluded by algorithm-based tools.

Validation and auditing. The role of auditing artificial intelligence tools for bias was a repeated concern raised by the panelists. Testimony debated whether audits should be required or recommended and whether they should be independent or self-conducted. Further, panelists questioned whether vendors should share liability related to the artificial intelligence tools they promote for commercial gain.

Transparency and trust. Multiple panelists raised concerns over the extent to which individuals subjected to artificial intelligence tools have any knowledge that such applications are being used. These concerns led the panelists to express doubt about how any individual with a disability who might be affected by artificial intelligence applications could know whether, when, and how to request an accommodation. Further, the panelists consistently shared as a priority that the EEOC support a system in which artificial intelligence is trustworthy in its applications.

Applicable or necessary laws. Testimony critiqued the application of traditional antidiscrimination analysis to the application of artificial intelligence as a hiring and screening tool. Although current disparate treatment analysis seeks to prohibit a decision-maker from considering race when selecting a candidate, panelists suggested that some consideration of race and other protected characteristics should be permitted as a strategy to de-bias automated systems to ensure an artificial intelligence model is fair to all groups. The panelists also addressed the applicability of the Uniform Guidelines on Employee Selection Procedures to automated decision tools and the potential for the use of analyses other than the “four-fifths rule” to evaluate the potential disparate impact of such tools.

Panelist Recommendations

Multiple panelists called for the EEOC to have a role in evaluating artificial intelligence applications for bias. Commissioner Sonderling suggested the EEOC consider taking an approach similar to the one taken by the U.S. Department of Agriculture, pursuant to which the agency would approve artificial intelligence products to certify their use. Other panelists urged the EEOC to issue guidance addressing compliance with Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act when utilizing artificial intelligence tools and suggested that the EEOC work with other federal regulators to address the use of these tools.

Key Takeaways

The EEOC is likely to issue one or more additional publications following the hearing’s testimony to provide guidance for employers and individuals on the application of equal employment laws to artificial intelligence applications. The meeting was part of EEOC Chair Burrows’s Artificial Intelligence and Algorithmic Fairness Initiative. One of the stated goals of the initiative is to “[i]ssue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.” On May 12, 2022, the EEOC issued its first technical guidance under this initiative titled, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” While this technical guidance focused on the application of the Americans with Disabilities Act (ADA) to artificial intelligence tools, the scope of the testimony at the hearing was significantly broader than this single law.

Further, the EEOC’s hearing took place as the New York City Department of Consumer and Worker Protection continued to develop enforcement regulations for the city’s automated employment decision tools law. New York City’s law is the first of its kind in the United States to impose a bias audit requirement on artificial intelligence applications. While future EEOC publications may address the role of a bias audit in employer decision-making tools, such an audit is unlikely to be required by the EEOC in the absence of a new federal law or a notice of proposed rulemaking.

Click Here for the Original Article

Congress Enacts “Fair Hiring in Banking” to Remove Certain Barriers to Employment in the Financial Services Industry

On December 23, 2022, President Biden signed the “James M. Inhofe National Defense Authorization Act for Fiscal Year 2023” which, among many other things, amends Section 19 of the Federal Deposit Insurance Act, 12 U.S.C. Section 1829 (“FDIA”), to reduce hiring barriers across the financial services sector.[i] Some bank associations supported the effort, stating:

The statute’s goal of ensuring a trustworthy, reliable banking workforce is important, particularly for an industry built on trust. However, safeguarding that trustworthiness should not come at the expense of offering hardworking people the chance to achieve meaningful employment opportunities in our nation’s banks. The participation of rehabilitated individuals with prior offenses in the banking industry would drive socioeconomic mobility and excluding those individuals would harm them while doing little to nothing to protect banks or their customers.

As a result of the amendments, the category of crimes for which a financial institution can outright reject a job applicant or terminate an employee has been significantly narrowed.

Section 19 in a Nutshell

Section 19 prohibits, absent prior written consent of the Federal Deposit Insurance Corporation (“FDIC”), a person convicted of a crime involving dishonesty, breach of trust, or money laundering from (broadly speaking) working for or otherwise participating, directly or indirectly, in the conduct of the affairs of a FDIC-covered financial institution. Section 19’s prohibition also covers anyone who has agreed to enter a pretrial diversion or similar program in connection with the prosecution of a crime involving dishonesty, breach of trust, or money laundering.

FDIC’s Statements of Policy and Final Rule

In 1998, the FDIC issued a Statement of Policy (“SOP”) (which was modified significantly in both 2012 and 2018) that provided guidance to the public regarding the application of Section 19. The SOP set forth criteria for granting relief from Section 19 to individuals convicted of certain low-risk, de minimis crimes, eliminating the need to apply for a Section 19 waiver.

On July 24, 2020, the FDIC issued a Final Rule to incorporate the SOP into the FDIC’s existing Procedure and Rules of Practice. The Final Rule:

  • Excluded all offenses that have been expunged or sealed from the scope of Section 19;
  • Increased, from one to two, the number of minor “de minimis” crimes permitted on a criminal record to qualify for the de minimis exception;
  • Eliminated the five-year waiting period following a first de minimis conviction and established a three-year waiting period following a second de minimis conviction (or 18 months if the offense occurred when the person was 21 years of age or younger);
  • Increased the threshold for small-dollar, simple thefts from $500 to $1,000 (the same dollar threshold for bad or insufficient funds check offenses); and
  • Expanded the de minimis exception for crimes involving the use of fake identification to circumvent age-based restrictions from only alcohol-related crimes to any such crimes related to purchases, activities, or premises entry.

The “Fair Hiring in Banking” Amendments to Section 19

While the changes in the Final Rule were not major, the “Fair Hiring in Banking” amendments significantly narrow the scope of crimes for which an application is required from the FDIC. Some provisions also codify language from the Final Rule (e.g., excluding all offenses that have been expunged or sealed if the intent of the criminal law is to render the record destroyed or sealed) and the process for waiver applications.

First, the amendment to Section 19 provides guidance to institutions in determining whether an offense is one of “dishonesty” by including a helpful definition of the term (although the amendment does not include a definition of “breach of trust”). Specifically, the term “criminal offense involving dishonesty” means an offense where the person, directly or indirectly, cheats or defrauds, or wrongfully takes property belonging to another in violation of a criminal statute. It also includes an offense that federal, state, or local law defines as “dishonest,” or for which dishonesty is an element of the offense. The term does not, however, include a misdemeanor criminal offense committed more than one year before the date on which a person files a waiver application, excluding any period of incarceration, or an offense involving the possession of controlled substances. (Note that Section 19 and the Final Rule address certain drug offenses. The takeaway here is that crimes of “dishonesty” do not cover drug possession offenses.)

Next, unless the conviction or program entry relates to an offense subject to the “minimum 10-year prohibition period” for certain offenses in 12 U.S.C. 1829(a)(2), an applicant or employee no longer needs a waiver application if:

  • It has been seven (7) years or more since the offense occurred or the person was incarcerated, and it has been five (5) years or more since the person was released from incarceration; or
  • The person committed the offense before age 21, and it has been more than 30 months since the sentencing occurred.
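The timing tests above can be sketched in code. This is a hypothetical illustration only, not the statute's actual test: it assumes the seven-year clock runs from the offense date, simplifies the incarceration language, and uses made-up names and dates.

```python
# Hedged sketch of the Section 19 waiver-exemption timing tests described
# above. Simplified: a real determination requires the statute's full text,
# including the 10-year prohibition offenses in 12 U.S.C. 1829(a)(2).
from datetime import date

def waiver_not_required(today, offense_date, release_date,
                        sentencing_date, age_at_offense):
    """Rough check of the two exemption paths (assumes no 10-year
    prohibition offense is involved)."""
    years = lambda d: (today - d).days / 365.25

    # Path 1: 7+ years since the offense and 5+ years since release
    time_based = years(offense_date) >= 7 and (
        release_date is None or years(release_date) >= 5)

    # Path 2: offense before age 21 and 30+ months since sentencing
    youth_based = age_at_offense < 21 and years(sentencing_date) > 2.5

    return time_based or youth_based

# Hypothetical example: offense 8 years ago, released 6 years ago
print(waiver_not_required(date(2023, 2, 1), date(2015, 1, 1),
                          date(2017, 1, 1), date(2015, 6, 1), 25))
# prints True
```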

Third, the provisions permit the FDIC to engage in rulemaking to expand the types of offenses that qualify as de minimis. Any such rule must:

  • Include a requirement that the offense was punishable by a term of three years or less (excluding periods of pre-trial detention and restrictions on location during probation and parole). Currently, under the Final Rule, it is one year.
  • For “bad check criteria,” require that the aggregate total face value of all insufficient funds checks across all convictions or program entries related to insufficient funds checks be $2,000 or less. Currently, under the Final Rule, it is $1,000.
  • Exclude certain lesser offenses (including the use of a fake identification, shoplifting, trespass, fare evasion, driving with an expired license or tag, and such other low-risk offenses as the FDIC may designate) if one year or more has passed since the applicable conviction or program entry.

Next Steps for Financial Institutions

FDIC-insured institutions should review their policies and practices to ensure consideration of Section 19 when assessing candidates’ conviction and program entry history. Convictions and program entries that are no longer automatically disqualifying under Section 19 should be evaluated under other state and local so-called “fair chance” or “ban the box” laws, along with the Equal Employment Opportunity Commission’s “Enforcement Guidance on the Consideration of Arrest and Conviction Records in Employment Decisions under Title VII of the Civil Rights Act.” Seyfarth Shaw will monitor the FDIC’s rulemaking and report on additional offenses qualifying as de minimis and any other issue the FDIC addresses in light of enactment of the Fair Hiring in Banking’s amendments.

[i] H.R. 7776 also includes similar amendments to Section 205(d) of the Federal Credit Union Act, 12 U.S.C. Section 1785(d), which contains hiring and employment restrictions similar to those in Section 19.

Click Here for the Original Article


New York City Delays Enforcement of its Artificial Intelligence Bias Audit in Employment Law as Rule-Making Continues

New York City (NYC) has delayed to April 15, 2023 the enforcement of its first-of-its-type law on bias in artificial intelligence (AI) tools used in employment. Local Law 144 of 2021 prohibits employers in NYC from using artificial intelligence (specifically referred to as “automated employment decision tools,” or AEDTs) to screen candidates for hiring or promotion unless the employers first conduct an audit to determine whether there is bias present in the tool. The audit must be conducted by an independent auditor that has no prior connection to either the AEDT or the employer or vendor. Employers must notify candidates that they use an AEDT, which qualifications the AEDT assesses, the types and sources of data the business collects for the AEDT, and its data retention policy, and must provide the candidates with an opportunity to request an alternative selection process or accommodation, if available. The employer must also publish the results of the bias audit on its website.

On September 19, 2022, the New York City Department of Consumer and Worker Protection (DCWP) issued proposed rules aimed at clarifying and expanding on the law. On December 23, 2022, DCWP released Revised Proposed Rules in response to the high volume of comments it received on the proposal. The Revised Proposed Rules made some significant changes to the initial rule proposal. As of the date of this alert, the rules have not been finalized.

Under the Revised Proposed Rules, the AEDT audit must calculate the selection rate, based on gender and race/ethnicity, for each category in the Equal Employment Opportunity Commission Employer Information Report (EEO-1), including all possible intersections of gender, race, and ethnicity. The audit must compare the results of the selection rate to the most selected category to determine an impact ratio. The impact ratio is essentially a score intended to show whether the AEDT selects individuals from one or more races/ethnicities and/or genders at a statistically significant rate, which could imply that it is exhibiting bias based on a protected class. The calculations provided in the Revised Proposed Rules generally follow the widely accepted EEOC’s Uniform Guidelines on Employee Selection Procedures. However, unlike the EEO-1 reports, where employers can deduce gender, race, or ethnicity based on visual observation when an employee fails to provide the data, it is unclear how employers are expected to tackle missing data for purposes of the AEDT audit. Further, it is unclear how employers should address small data sets that could lead to skewed statistical analyses.
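The selection-rate and impact-ratio arithmetic described above can be illustrated with a short sketch. The applicant counts below are hypothetical, and the category labels are simplified to two generic groups rather than the full EEO-1 intersections the rules require.

```python
# Hedged sketch of the audit arithmetic: selection rate per category,
# then each rate divided by the most-selected category's rate.
def impact_ratios(selected, applicants):
    """Return {category: (selection_rate, impact_ratio)}."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Hypothetical counts by category
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 80, "group_b": 45}

for group, (rate, ratio) in impact_ratios(selected, applicants).items():
    # The Uniform Guidelines' "four-fifths rule" flags ratios below 0.8
    flag = " (below four-fifths threshold)" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

Here group_b's rate of 0.30 against group_a's 0.40 yields an impact ratio of 0.75, below the four-fifths benchmark that the Uniform Guidelines treat as evidence of adverse impact.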

An independent auditor must conduct the bias audit, but multiple organizations can use the same bias audit, so long as each employer provides historical data (as defined in the rules) to the independent auditor. The AEDT vendor can hire an independent auditor to review its AEDT, and it can provide the audit to organizations that wish to use the tool.

The law was originally intended to go into effect on January 1, 2023, but enforcement has been delayed until April 15, 2023 as rule-making around the law continues. The Revised Proposed Rules, for instance, clarify some key terms, such as expounding on the definition of AEDT, which is currently defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” The Revised Proposed Rules explain that “to substantially assist or replace discretionary decision making” means

  • to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered;
  • to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or
  • to use a simplified output to overrule conclusions derived from other factors, including human decision-making.

Whether additional changes and clarifications will be made to the Revised Proposed Rules remains to be seen.


The NYC DCWP can enforce the law and issue fines between $500 and $1,500 per violation, per day. The law also provides a private right of action for employees and candidates.

Jurisdictional Reach

There are still open questions as to the jurisdictional reach of the law. Employers located within New York City must conduct bias audits of AEDTs and post the results of those audits to their websites. However, employers are only obligated to notify employees or candidates who reside in the city that the tool will be used in connection with their application. Employers must also inform NYC resident candidates as to the job qualifications and characteristics that the tool will be used to screen for, as well as the type and source of data they collect for use with the AEDT. Thus, it appears that the law would establish different obligations for employers depending on whether their potential candidates reside within or outside the city limits. It is also unclear whether the law applies to non-NYC employers who target NYC residents for hiring or promotion.

AI in Hiring

AI in hiring has progressed far past automated résumé keyword checks. While AI is widely used to sort and rank candidates based on the contents of their résumés, advanced AI interview software can assess intangible features like speech patterns, body language, and word choice. It can be used to judge candidates’ responses to emotional cues to determine whether they have the personality traits the employer views as valuable. Game-based AI tools test how a person plays a game and compare that behavior with how successful people at the company played the same game.

The idea is that the AI will learn which traits successful employees possess so it can identify similar traits in candidates for jobs and promotion. The concern is that it may identify the wrong traits, which can result in bias as to protected classes and perpetuate existing inequities. For instance, if AI used facial recognition to identify successful employees, and they all had blond hair, it could identify blond hair as a positive trait and filter on that basis.

Regulatory Trends

The Equal Employment Opportunity Commission launched an initiative in 2021 to examine how to prevent discrimination in AI, and held a hearing in January 2023 to examine how to prevent unlawful bias in the use of AEDTs. Further, the US Congress introduced the Algorithmic Accountability Act of 2022, which would direct the FTC to create regulations requiring companies to conduct impact assessments for systems that make automated decisions. NYC’s AEDT law comes on the heels of a clear increase in state regulation surrounding the use of AI in hiring. Illinois has enacted a law requiring employers to obtain informed consent from applicants whose video recordings are analyzed by AI for the purposes of hiring. The law attempts to guard against bias by requiring employers to collect and submit: “(1) the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of artificial intelligence analysis; and (2) the race and ethnicity of applicants who are hired.” Maryland enacted a similar law in 2020. And the list continues to grow. For example, the California Privacy Rights Act, which took effect on January 1, 2023, and is still going through the rulemaking process, is expected to address AI and notice requirements. And, New Jersey has introduced a bill that is similar to the initial draft of NYC’s AEDT law.


The value of AI is that it can analyze massive datasets much faster than humans can, which allows it to consider many more factors, all at the same time, when making a decision. While AI tools for hiring could produce a significant cost and time savings for employers, the technology can also be a target for litigation based on several factors, including the potential for discrimination on the basis of race or gender. On the other hand, some claim that AI may actually reduce discrimination by removing unconscious human biases.

Organizations have several options to limit their potential exposure when using AI tools in the hiring process: independently test and monitor the AI tools on an ongoing basis and document that there is no material bias; require an AI vendor to conduct similar independent bias testing and to provide indemnity if their testing is wrong; shift liability to an insurer; or, in jurisdictions that do not regulate bias, accept the potential legal and reputational risks in light of the potential benefits.

NYC employers should carefully assess whether they use tools that could meet the NYC law’s definition of an AEDT, as its scope may be broader than one would expect. Under the definition above, widely used software that filters out resumes based on keywords could qualify as an AEDT.

Employers should stay attuned to this rapidly developing regulatory area, working with their trusted advisors to organize their data, their notice and their documentation processes; to inventory and audit their use of AEDTs; and incorporate counsel in the bias audit and governance processes to ensure attorney-client privilege protection where possible.

Click Here for the Original Article

Seattle City Council Approves First-in-the-U.S. Ban on Caste Discrimination

Seattle has become the first U.S. city to approve legislation amending city ordinances to prohibit discrimination on the basis of “caste,” including in the context of employment.  The proposals now go before the mayor for signature.

The legislative push was largely driven by discrimination-related concerns with respect to Seattle’s South Asian population.  The term “caste” is defined broadly for purposes of the amendments as “a system of rigid social stratification characterized by hereditary status, endogamy, and social barriers sanctioned by custom, law, or religion.”

Under the amendments, which are broadly written, it would be an unlawful discriminatory practice for Seattle employers to:

  • print, circulate, or cause to be printed, published, or circulated, any statement, advertisement, or publication relating to employment, or to use any form of application for employment, that indicates any preference, limitation, specification, or discrimination based upon caste; or
  • engage in any act, by itself or as part of a practice, and including harassment, that is intended to or results in different treatment or differentiates between or among individuals or groups of individuals by reason of caste.

It would also be unlawful under the amendments for any person to discriminate in a place of public accommodation (including retail establishments, restaurants, medical offices, entertainment venues, and more) by “harassing, intimidating, or otherwise abusing any person or person’s friends or associates” because of caste.

Employers and public accommodations with operations or personnel in Seattle should take note of this development and review their policies and practices for compliance.

Click Here for the Original Article

New Pay Transparency Laws Impact Multi-State Employers Nationwide

Pay transparency laws are proliferating across multiple U.S. states and localities. For example, employers with a single employee in Colorado, California, Washington, or New York City that post advertisements for jobs that could be performed in such jurisdictions—including remote jobs—are likely required to include a good faith range of pay for the job in the advertisements. Laws in these and other jurisdictions (including Connecticut, Maryland, and Nevada, and soon New York State) have other requirements as well, such as requiring employers to disclose pay ranges to job applicants upon request, prohibiting employers from asking job applicants about their salary history, and barring retaliation against employees who discuss their own or other employees’ pay (a rule that also applies to all federal contractors). Pay transparency laws vary in scope, making compliance a challenge for multi-state employers, especially those with remote workers in multiple states.

The changing pay transparency landscape

Pay transparency laws now cover a significant proportion of employers in the United States. Previously, employers had no obligation to disclose pay information when advertising job opportunities, could ask applicants what they earned in their previous jobs during salary negotiations, and generally treated compensation information confidentially. In recent years, however, many states and localities have enacted laws banning employer inquiries about prior pay and prohibiting retaliation against employees who discuss their compensation or the compensation of others. Additionally, some states, such as California, Connecticut, Maryland, Nevada, and Rhode Island, require employers to provide compensation ranges to job applicants upon the applicant’s request or automatically when making a job offer.

Moreover, new pay transparency laws that go even further have popped up in Colorado, California, Washington, New York City, and elsewhere, and more are coming, including a New York State law slated to go into effect in September 2023. These laws are often written very broadly to cover an employer with as few as one employee in the jurisdiction and require employers to disclose specific types of information about compensation in job advertisements for any jobs that could even potentially be performed within the jurisdiction, either in-person or remotely. Some of the new laws require covered employers to disclose a good faith pay range for each advertised job, whereas others also require disclosure of information about other compensation and benefits, such as bonuses, commissions, and health insurance. Some of the new laws also require employers to disclose internal promotion and transfer opportunities, and the pay associated with these opportunities, to current employees. And some states (California and Illinois) require certain employers to submit their pay data to the state. Other pay transparency bills are pending in Massachusetts, Pennsylvania, and South Carolina, and more legislation is expected.

Consequences for employers

Employers with either a physical presence or remote employees in any of these jurisdictions need to consider how the pay transparency laws apply to them. Failure to comply may result in an administrative enforcement action or private lawsuit. Some jurisdictions impose significant civil penalties for noncompliance with pay transparency requirements—for example, up to US$10,000 per violation in California and Colorado, and up to US$250,000 per violation in New York City.

Covered employers will need to establish and maintain pay ranges for their jobs if they have not already done so. Employers should consider the potential consequences of disclosing their pay ranges, which may include employee morale issues when some employees discover they are being paid at the low end of (or below) a posted range, or potential pay equity lawsuits when employees believe pay differentials within the organization are due to a protected characteristic such as race, ethnicity, or sex.

Action items

Employers should consider taking the following steps to comply with pay transparency requirements:

  • Consult with counsel to determine if you are covered by one or more state or local pay transparency laws. Remember that you may have obligations under some of these laws even if you have only one remote employee working in the jurisdiction.
  • If you are covered, determine how you are going to comply. For example, are you going to comply only in specific jurisdictions where a pay transparency law applies to you, or will you take a uniform approach and disclose pay ranges in all job postings nationwide?
  • Decide whether to conduct an internal pay equity review or other evaluation of compensation to identify inconsistencies that may create employee morale issues or potential legal exposure.
  • Consider an external pay study, given that competitors will have increased visibility into your pay practices.

Click Here for the Original Article

Employer Alert: SB 731 Will Expand Sealing of Criminal Records

Beginning July 1, 2023, SB 731 will provide for the automatic sealing of certain felony criminal records.  Arrests that do not result in conviction will also be sealed. This law also permits individuals with violent or serious felony records to petition courts to order their criminal records sealed.  Sealing of these records will make them unavailable to most employers through a background search, although school districts may still access these records for teacher credentialing or employment decisions.

Under SB 731, most defendants convicted of a felony are eligible to have their records sealed if they have completed their sentence, along with parole and probation, and they haven’t been convicted of a new felony for 4 years. Those defendants with sex offender status are excluded.

Notably, Governor Newsom vetoed SB 1262, which would have facilitated criminal background searches by permitting searches of the California Superior Court online index using a defendant’s date of birth and/or driver’s license number. This law was proposed in response to the May 2021 ruling by the California Court of Appeal in All of Us or None – Riverside Chapter v. W. Samuel Hamrick, which prohibited this practice. The case was later denied review by the California Supreme Court. See our article here for more details.

Click Here for the Original Article



Employers Face Six-Year Statute of Limitations for Criminal Background Check Claims

On Jan. 12, 2023, the U.S. District Court for the District of New Jersey held in Ramos v. WalMart, Inc. that Pennsylvania plaintiffs have up to six years to file claims against employers for improper use of criminal history under Pennsylvania’s Criminal History Record Information Act (CHRIA).

The plaintiffs alleged that Walmart violated the CHRIA by declining to hire them based on criminal history that was unrelated to their suitability for employment. They advocated for a six-year limitations period based on Pennsylvania’s six-year catchall statute of limitations, which is applicable to any action that is not “subject to another limitation … nor excluded from the application of a period of limitation[.]” 42 Pa. C.S.A. § 5527(b). Walmart resisted, arguing that Pennsylvania’s two-year statute of limitations for tortious conduct should apply.

The district court, predicting Pennsylvania law, agreed with the plaintiffs. It rejected Walmart’s arguments that the CHRIA exclusively concerned tortious conduct, finding that “plaintiffs may bring claims [under the CHRIA] analogous to various Pennsylvania common law causes of action, more than simply those sounding in tort.”

Although the court did not elaborate on what those Pennsylvania causes of action were, it cited Taha v. Bucks County Pennsylvania, in which the U.S. District Court for the Eastern District of Pennsylvania similarly applied a six-year statute of limitations. The Taha court reasoned that a broader catchall limitations period was appropriate based on the variety of requirements the CHRIA imposes on criminal justice agencies, licensing agencies, private employers and others.

While the case did not analyze the issue, it did serve as a reminder to employers that the CHRIA provides that “[f]elony and misdemeanor convictions may be considered by the employer only to the extent to which they relate to the applicant’s suitability for employment in the position for which he has applied.” 18 Pa. C.S.A. § 9125(b). The CHRIA does not specify how an employer should decide whether a conviction is “related” to an applicant’s suitability for employment. The CHRIA, like the federal Fair Credit Reporting Act, also includes procedural notification requirements if an employer makes an adverse employment decision based on a criminal background check.

Successful CHRIA plaintiffs can recover actual damages, with a statutory minimum of $100 per violation, plus attorneys’ fees, costs, and punitive damages of not less than $1,000 or more than $10,000 for willful violations. More generally, as evidenced by the Ramos complaint, CHRIA plaintiffs (and others alleging damages from employment denials due to criminal background checks) can seek to represent other aggrieved individuals through class cases.
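To make the exposure described above concrete, the recovery floors can be sketched in a few lines. This is purely an illustration of the arithmetic in the statutory figures the article cites (a $100-per-violation minimum for actual damages and $1,000 to $10,000 in punitive damages per willful violation); the function name and inputs are hypothetical, and attorneys’ fees and costs are excluded.

```python
# Illustrative sketch of the CHRIA recovery floors described above.
# The dollar figures restate the article's summary of the statute;
# the function and its inputs are hypothetical, not a legal tool.

def chria_minimum_exposure(violations: int, willful_violations: int = 0) -> dict:
    """Estimate the statutory floor of a CHRIA claim (fees/costs excluded)."""
    if willful_violations > violations:
        raise ValueError("willful violations cannot exceed total violations")
    actual_floor = 100 * violations                  # $100 minimum per violation
    punitive_floor = 1_000 * willful_violations      # punitive floor, willful only
    punitive_ceiling = 10_000 * willful_violations   # punitive cap, willful only
    return {
        "actual_damages_floor": actual_floor,
        "punitive_range": (punitive_floor, punitive_ceiling),
        "minimum_total": actual_floor + punitive_floor,
    }

# A hypothetical class of 500 applicants with 50 willful violations:
exposure = chria_minimum_exposure(500, willful_violations=50)  # minimum_total: $100,000
```

The point of the sketch is how quickly per-person statutory minimums compound in a class case, which is what makes the six-year window in Ramos significant.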

Although the CHRIA was enacted more than 30 years ago, lawsuits were rare until fairly recently, when employer background check policies began to receive increased scrutiny from the Equal Employment Opportunity Commission and state legislators. The CHRIA’s broad language and substantial penalties make it a particularly popular target for plaintiffs’ lawyers — in the Taha case, for example, a jury awarded the class approximately $68 million. The Ramos court’s recent decision will only strengthen these incentives, and litigation in other states likely will continue to rise as well, given the patchwork of varying state and local restrictions regarding employer use of criminal history and credit reports. Employers should continue to monitor developments in this area and assess their background check processes accordingly.

Click Here for the Original Article

Illinois Supreme Court: 5-Year Statute of Limitations for BIPA Claims

Earlier today, the Illinois Supreme Court issued a decision in Tims v. Black Horse Carriers, Inc., 2023 IL 127801, in which the court held that a five-year statute of limitations applies to all claims arising under the Illinois Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (BIPA). BIPA’s substantive obligations are set out in five subsections of Section 15. Section 15(a) pertains to the establishment of, maintenance of, and adherence to a retention schedule and guidelines for destroying collected biometric information. Section 15(b) pertains to notice and written consent before collecting or storing biometric information. Section 15(c) pertains to selling or otherwise profiting from collected biometric information. Section 15(d) pertains to the disclosure or dissemination of biometric information without consent. Section 15(e) pertains to the proper storage and transmittal of collected biometric information.

In Tims, the plaintiff filed a class action lawsuit against his former employer, Black Horse, alleging that it violated BIPA in relation to its practices regarding the collection and use of the plaintiff’s fingerprint. Specifically, the plaintiff alleged that Black Horse (1) failed to institute, maintain and adhere to a publicly available biometric information retention and destruction policy under section 15(a); (2) failed to provide notice and to obtain his consent when collecting his biometrics under section 15(b); and (3) disclosed or otherwise disseminated his biometric information to third parties without consent under section 15(d).

Because the BIPA itself does not contain a statute of limitations, trial courts have been analyzing whether BIPA is governed by the one-year statute of limitations under 735 ILCS 5/13-201 (providing “actions for slander, libel, or for publication of matter violating the right of privacy, shall be commenced within one year next after the cause of action accrued”) or the five-year statute of limitations under 735 ILCS 5/13-205 (providing “actions on unwritten contracts, express or implied … and all civil actions not otherwise provided for, shall be commenced within 5 years next after the cause of action accrued”).

In Tims, the First District Appellate Court previously held that a one-year statute of limitations applies to claims under sections 15(c) and 15(d) of BIPA, but that the five-year statute of limitations applies to claims under sections 15(a), 15(b) and 15(e). In so doing, the court reasoned that the one-year statute of limitations codified in 735 ILCS 5/13-201 applies to section 15(c) and 15(d) claims because publication was an element of such claims, but the five-year statute of limitations under 735 ILCS 5/13-205 applies to sections 15(a), 15(b) and 15(e) because publication was not an element of such claims.

In reversing and remanding, the Illinois Supreme Court explained that it must consider more than the plain language of the statute and, instead, look to “purposes to be achieved” and to furthering the “goal of ensuring certainty and predictability in the administration of the limitations periods that apply to causes of action under the Act.” Determining that only one statute of limitations should apply to BIPA and that the subsections under BIPA are “not otherwise provided for,” the court determined that “the Act is subject to the default five-year limitations period found in section 13-205 of the Code.”
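In practical terms, the holding means a single limitations clock now governs every subsection of Section 15. The date arithmetic is simple enough to sketch; the helper below is hypothetical and, as the accrual question remains open, it assumes the accrual date is already known.

```python
# Sketch of the limitations arithmetic after Tims: every BIPA subsection
# carries the five-year catchall period of 735 ILCS 5/13-205. When the
# clock starts (accrual) is a separate, still-open question.

from datetime import date

BIPA_LIMITATIONS_YEARS = 5  # uniform across sections 15(a)-(e) after Tims

def bipa_filing_deadline(accrual: date) -> date:
    """Last day to file, five years after the claim accrued."""
    try:
        return accrual.replace(year=accrual.year + BIPA_LIMITATIONS_YEARS)
    except ValueError:  # Feb 29 accrual date, non-leap target year
        return accrual.replace(year=accrual.year + BIPA_LIMITATIONS_YEARS, day=28)

deadline = bipa_filing_deadline(date(2023, 2, 2))  # date(2028, 2, 2)
```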

Now, it remains to be determined when the five-year statute of limitations accrues. That issue is pending before the Illinois Supreme Court in Cothron v. White Castle System, Inc., No. 128004, which was argued in May 2022.

Click Here for the Original Article

California Unicorn: Court Issues Favorable Background Check Ruling for Employer in Class Action

Employers in California are frequently faced with class action lawsuits brought by current or former employees. Oftentimes, these actions are brought for alleged wage and hour violations, but we’ve seen an uptick in suits claiming violations of state and federal background check laws — even for minor technical errors. However, in an occurrence that may be as rare as a unicorn sighting, a California Court of Appeal recently delivered some good news for employers. The court affirmed a decision to dismiss a former employee’s class action claiming that a convenience store chain’s background check forms were defective. Notably, the court did not actually decide whether the forms were defective, and instead dismissed the case because the worker could not show he suffered an actual harm or injury. This potentially significant decision could curtail class actions that merely assert technical violations unaccompanied by any harm to consumers or employees. What do you need to know about the decision and its impact on employers?

What Happened Here?

As part of the hiring process for Circle K Stores, Ernesto Limon received a set of disclosure forms and authorized Circle K to conduct a background check. Limon was hired by Circle K with no issue regarding his background check. However, after his employment ended, he sued Circle K in federal court for alleged violations of the Fair Credit Reporting Act (FCRA), the federal law governing background checks.

The FCRA requires employers to provide a “clear and conspicuous disclosure” – which can’t be combined with other documents or contain extraneous information – before procuring a background check. Limon alleged that Circle K violated this provision. Nevertheless, the federal court dismissed his claims for lack of standing, holding that he failed to establish any actual harm as a result of Circle K’s alleged technical violations of the statute.

Limon re-filed his claims in state court, and the California trial court also dismissed the case on similar grounds. This decision was the first of its kind, as California state courts have historically (and frustratingly) permitted such claims to proceed based on statutory violations alone, even without the worker showing concrete harm or injury.

The state appellate court affirmed the trial court’s decision, holding that “an informational injury that causes no adverse effect is insufficient to confer standing upon a private litigant to sue” under California law. Considering the privacy goals of the FCRA, the court determined that holding Circle K liable for a violation that caused no injury would not align with the act’s purpose.

On January 25, the California Supreme Court rejected the plaintiff’s petition for review of the decision and his request to depublish it — which is another rare win for employers.

What Should Employers Do?

This decision provides California employers with a helpful defense against similar claims. Notably, such lawsuits are usually brought on a class-wide basis on behalf of all individuals who went through the company’s background screening process (including those who were never actually hired) — and the penalties sought range from $100 to $1,000 for each individual.

It is important to have your background check disclosure and authorization forms reviewed by legal counsel for compliance, even forms that are supplied by your background check vendor. This is the most effective way to avoid similar claims. In California, you’ll also need to comply with the state’s Fair Chance Act, as well as applicable local ordinances, which regulate inquiries into job applicants’ criminal history.

Moreover, multistate employers should note that laws on background checks vary significantly from state to state, and many involve more steps than what is required under FCRA. Therefore, you should carefully review the rules in the locations where your employees are located and coordinate with your workplace law counsel to make sure you have appropriate steps in place to comply.

Click Here for the Original Article

Ignoring a Failed Drug Test as a “Reasonable Accommodation?”

On September 21, 2022, the EEOC announced that it was suing a Florida senior living residence, and the entity which owned it, for disability discrimination. The EEOC stated that the facility “Revoked Applicant’s Job Offer After Her Legally-Prescribed Medications Prevented Her From Passing a Required Drug Test.”

I found the press release a bit sloppy and skimpy on the facts. During the interview process (alleges the EEOC), an applicant had told the employer that she was a veteran with PTSD, and that she took a prescribed medication for the PTSD, which medication (she reportedly told the employer during the interview, again according to the EEOC) would cause her to fail the required pre-employment drug test. The EEOC’s press release made the general statement that the defendants had violated the ADA’s obligation to make a reasonable accommodation to a person with a disability when they revoked an offer of employment based on a failure to have a negative pre-employment drug test.

This statement sent shivers down my spine – and questions racing through my head.  Seeking more information, I got a copy of the court-filed Complaint. As an attorney, I am aware that sometimes assertions made in court filings are less than accurate. For me, the Complaint clarified one very important point, and answered one critical question.

In Paragraph 73 of the Complaint, the EEOC identifies the “accommodation” that the applicant sought / that the EEOC asserted should have been made. The EEOC contends in its Complaint that the employer’s drug testing policy should be modified to provide the applicant an opportunity, “to show that non-negative results were due to legal, prescription medications.”

I practice labor and employment law in Iowa. Perhaps in recognition of the numerous and often complex provisions with which Iowa employers must comply, a couple of years ago, the Iowa Court of Appeals described the Iowa private drug testing statute as “byzantine.” While not disagreeing with the characterization, for applicants for employment, the Iowa drug testing statute has a few simple guideposts.

  • If there is testing, any positive drug test must be confirmed by a SAMHSA-certified or IDPH-approved laboratory.
  • The second, confirmatory drug test must use a different chemical process than was used in the initial screen, which “shall be a chromatographic technique such as gas chromatography/mass spectrometry, or another comparably reliable analytical method.”
  • Under Iowa law, the individual subject to testing must be given an opportunity to provide “any information which may be considered relevant” including prescription and nonprescription drugs.
  • The Iowa law requires that before the results are reported to an employer, a medical review officer (as defined in the Iowa law) must review and interpret not only the quantitative and qualitative test results, but also any information provided by the individual.

While Iowa’s statute does not provide any person or agency with substantive rule-making authority, the United States Department of Transportation has extensive regulations on drug and alcohol testing. Those regulations include 49 CFR Subpart G – Medical Review Officers and the Verification Process. Of particular interest to this subject matter is 49 CFR § 40.131(a) – a detailed explanation of what a federal MRO is to do upon receiving a “non-negative” drug test report:

(a) When, as the MRO, you receive a confirmed positive, adulterated, substituted, or invalid test result from the laboratory, you must contact the employee directly (i.e., actually talk to the employee), on a confidential basis, to determine whether the employee wants to discuss the test result. In making this contact, you must explain to the employee that, if he or she declines to discuss the result, you will verify the test as positive or as a refusal to test because of adulteration or substitution, as applicable.

I returned to the EEOC’s federal court Complaint, seeking information on the “defendant employer’s” drug testing policy and, specifically, on what exactly happened in this case.

The EEOC described the employer’s drug testing policy in Paragraphs 34-40 of the Complaint. In summary, the employer uses an off-site third party to collect specimens and administer a “rapid response” drug test. I am reporting the allegations of the EEOC, and so acknowledge that the statements may not be accurate — although it is hard to believe that an attorney complying with Fed. R. Civ. P. 11 could get these wrong. The EEOC contends that the result of the “rapid response” test is available within 15 minutes; the result is either “negative” or “non-negative.” Again according to the Complaint, a “non-negative” result could mean not only that there is a positive for a drug or a drug metabolite, but also could mean that the sample is adulterated, substituted (?) or invalid. The EEOC states in the Complaint that all non-negative results are sent to a named-in-the-Complaint third-party laboratory (which, while not mentioned in the court-filed complaint, is a name I recognize as being SAMHSA-certified), “and examined by a Medical Review Officer to determine the cause of the non-negative, including whether the donor was taking illicit drugs or legally prescribed medication.”

I read that allegation of the Complaint, and then went back and re-read the allegation on what “accommodation” the EEOC asserts the employer should have made, and I scratched my almost bald head.

The quotation above is the only time “Medical Review Officer” (or MRO) appears in the Complaint.  This appears to be a (with apologies) HUUUUGE hole.

  • Did the third party not send the test results to the designated laboratory / MRO for review?  or
  • Did the MRO attempt to speak with the applicant / donor and she declined to speak with the MRO – – resulting in the laboratory test being reported to the employer?  or
  • Did the applicant provide the information to the MRO, but the “prescription medication” was for – say – marijuana?  See 49 CFR §40.137.  or
  • Given the focus of the EEOC on “non-negative” rather than “positive,” was there a problem with the specimen – inconsistent with human urine, adulterated, etc.? or
  • An MRO is to look at both the qualitative and the quantitative laboratory results:  A prescription is not a license to take as much as one wants.  One can have a prescription for a medication, but be taking an amount different than prescribed.  [49 CFR § 40.137 (e)(3):  “Use of the substance can form the basis of a legitimate medical explanation only if it is used consistently with its proper and intended medical purpose.”]
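The verification steps quoted above from the DOT rules can be compressed, very roughly, into a decision function. This is a toy sketch of the logic in 49 CFR §§ 40.131 and 40.137 as described in this article, not a compliance tool; it collapses a detailed regulatory process into four hypothetical inputs.

```python
# Toy sketch of the DOT MRO verification flow described above
# (49 CFR 40.131 / 40.137). This grossly simplifies a detailed
# regulatory process and is illustrative only.

def mro_verify(lab_result: str,
               employee_discussed: bool,
               has_prescription: bool,
               used_as_prescribed: bool) -> str:
    """Return the MRO's verified result for a confirmed lab report."""
    if lab_result == "negative":
        return "negative"
    # If the employee declines to discuss the result, the MRO verifies it
    # as positive, or as a refusal to test for adulteration/substitution.
    if not employee_discussed:
        return "positive" if lab_result == "positive" else "refusal"
    # A prescription is a legitimate medical explanation only if the drug
    # was "used consistently with its proper and intended medical purpose"
    # (49 CFR 40.137(e)(3)) -- i.e., taken as prescribed.
    if lab_result == "positive" and has_prescription and used_as_prescribed:
        return "negative"
    return "positive" if lab_result == "positive" else "refusal"

# A confirmed positive explained by a medication taken as prescribed:
result = mro_verify("positive", True, True, True)  # verified "negative"
```

Even this toy version shows why the Complaint’s silence matters: the outcome turns entirely on what happened at the MRO step.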

Because in the Complaint the EEOC describes a process which specifically includes an MRO review of the laboratory results, one wonders precisely what the EEOC is seeking when it asserts that the company it is suing should have “reasonably accommodated” the applicant by permitting her “to show that non-negative results were due to legal, prescription medications.”

I certainly hope that the EEOC is not asserting that the employer here should have relied on unqualified personnel – that is, someone other than an MRO – to make the determination that a “non-negative” drug or alcohol test result is “due to legal, prescription medications.”

Some current media are reporting on allegations of governmental agency over-reaching and abuse. One would think that if the EEOC had gone through the investigation and conciliation efforts asserted in the Complaint, the EEOC would be able to provide some information / allegation as to what happened at the “MRO review” process the EEOC itself alleges is part of the defendant employer’s drug testing protocol. Was the specimen inconsistent with human urine?  Was the amount found in the specimen inconsistent with what would be expected if the drug were taken in an amount prescribed? Was there a substance found in the specimen in addition to that which was prescribed? Did the applicant refuse to talk to the MRO at all?

Or did the MRO drop the ball and not make any call to the applicant, or to any donor?  Under this non-DOT drug testing policy, what is required – what is expected – of the “MRO”?

Employers are reminded of the importance of periodically checking that the drug testing policy the employer wrote (hopefully with the assistance of competent counsel) is being applied as written.

Click Here for the Original Article


‘Manifestly Inaccurate’ Content to be Removed from Search Results

An important decision recently handed down by the Court of Justice of the European Union (CJEU) concerning privacy rights and, in particular, the right to be forgotten is likely to have a considerable impact on the operations of search engine operators.


Following reference from the German Federal Court of Justice, the CJEU found that operators of search engines must de-reference information which is found to be manifestly inaccurate. However, there is no requirement on the individual requesting the removal to provide a judicial decision or order against the publisher of the offending website in order to qualify for its removal. A search engine can only make a reasonable request for information from the individual requesting de-referencing. The circumstances of each particular case will be taken into account in determining what is reasonable.

The case involved two managers in a group of companies. In 2015, three articles were published on a website criticising investments by companies related to one of the managers. One of the articles included photographs depicting both managers living a very luxurious lifestyle. Both managers requested that Google, as the controller of personal data processed by its search engine:

  • De-reference the links to the articles at issue because they contain inaccurate claims and defamatory opinions, and
  • Remove photos displayed in the form of thumbnails from the list of results of an image search made on the basis of their names.

Requirement to de-reference alleged inaccurate claims

The CJEU found that on receiving a request for de-referencing, the search engine operator must determine if the link to the internet page following a search of the data subject’s name is necessary for exercising the right to freedom of information. This is a right internet users enjoy under Article 11 of the Charter of Fundamental Rights of the European Union.

As a general rule, a data subject’s right to respect for private and family life, and right to protection of personal data, overrides the legitimate interest of internet users who may want to access this information. However, that balance may depend on the relevant circumstances of each case, in particular:

  • The nature of the information
  • Its sensitivity for the data subject’s private life, and
  • The interest of the public in having that information

Where the data subject plays a role in public life, that person must display a greater degree of tolerance for their private life being public.

Evidence required

In considering a request for de-referencing, the obligation will be on the requesting individual to establish that there is a manifest inaccuracy in the content in question. At the very least, the requestor must show that the inaccurate part is not a minor part of the content as a whole. In determining this question, the CJEU also noted that to avoid imposing an excessive burden on the requestor, they only need provide evidence that can reasonably be required of them to establish the manifest inaccuracy. The circumstances of the particular case will need to be considered in determining what can reasonably be required. A judicial decision or order will not be essential to qualify for de-referencing.

As to what is required of search engine operators in these circumstances, operators will be required to take into account all of the rights and interests involved. However, they will not be required to engage in a fact-finding exercise, or to organise an adversarial debate to find missing information. Where an individual has provided sufficient evidence of inaccuracy, the request for de-referencing should be complied with.

The CJEU qualified its decision by saying that it will be considered disproportionate to de-reference content if only information of minor importance is inaccurate. In addition, where a search engine operator decides not to de-reference, the data subject must have the option to refer the matter to the supervisory authority (in Ireland, the Data Protection Commissioner) or the judicial authority (i.e. the Courts). If proceedings are brought to the attention of a search engine operator, a warning regarding the existence of proceedings must be added to the search results.


Thumbnail images

The second question referred to the CJEU in this case concerned whether, in the context of a request to de-reference images in the form of thumbnails which appear following an image search, the original context of the publication of those images must be conclusively taken into account.

In answering this question, the CJEU stated that the display, following a search by name, of images of a data subject in the form of thumbnails constitutes a particularly significant interference with that person’s right to privacy. In considering a request for de-referencing in this context, a search engine must identify whether displaying the thumbnail images is necessary for exercising the right to freedom of information of internet users generally. If the thumbnails contribute to a debate in the public interest, this will be considered an essential factor to be taken into account. In addition, account should be taken of the informative value of the photos, regardless of the context of their publication on the internet page from which they were taken, and of any text accompanying the photos in the search results that may cast light on their informative value.


Key takeaways

This is an interesting decision which should not be ignored by search engine operators, particularly in view of the fact that:

  • A court judgment or order is not required before a request to de-reference must be complied with.
  • The context of each request will need to be considered when balancing the rights of the data subject with the right to freedom of information.
  • Where the request for de-referencing has been denied by the search engine operator, the data subject may appeal the matter to the DPC and/or the Irish Courts. As a result, we may see increased litigation in this area.
  • Where there is ongoing litigation, search engine operators are required to include a notice next to such search results.

Click Here for the Original Article

UK Expands Guidance on the Supply of Professional and Business Services to Persons Connected with Russia

On February 7, 2023, the UK Department for International Trade (“DIT”) published an expanded version of its guidance on supplying professional and business services to a person connected with Russia (“DIT Guidance”), following a broadening of the types of services covered by the ban in December 2022. The DIT Guidance sets out additional details regarding the services falling within the scope of these sanctions, enforcement, applicable exceptions, and the licence application process. For more information on the ban on the supply of professional and business services, see our previous blog post (here).

The Professional and Business Services Ban

Since July 21, 2022, pursuant to The Russia (Sanctions) (EU Exit) (Amendment) (No. 14) Regulations 2022, any persons subject to UK sanctions jurisdiction, including UK parent companies with Russian subsidiaries, have been prohibited from directly or indirectly providing accounting, public relations, and business and management consulting services to a person connected with Russia, absent an available exception or licence.

On September 30, 2022, the UK announced the expansion of the scope of the services ban to advertising, architectural, auditing, engineering, IT consultancy and design, and transactional legal advisory services. With the exception of the ban on transactional legal advisory services, which is expected to be legislated for in the coming months, the expansion of the services ban to these services came into force on December 16, 2022, pursuant to The Russia (Sanctions) (EU Exit) (Amendment) (No. 17) Regulations 2022.

For the purpose of the services ban, a “person connected with Russia” includes:

  • companies incorporated, or constituted, under Russian law (including subsidiaries of UK companies in Russia);
  • companies domiciled in Russia; and
  • individuals, or groups of individuals, who are located, or ordinarily resident, in Russia.
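For compliance teams encoding counterparty screening logic, the definition above reduces to a three-limb predicate. The sketch below is a hypothetical illustration of that test only; the data model and field names are invented, and real screening requires case-by-case legal review.

```python
# Hypothetical sketch of the "person connected with Russia" test summarised
# above (UK Russia Regulations). Field names are invented for illustration;
# this is not a substitute for legal review.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Counterparty:
    is_company: bool
    incorporated_in: Optional[str] = None  # country of incorporation/constitution
    domiciled_in: Optional[str] = None     # country of domicile (companies)
    located_in: Optional[str] = None       # location/ordinary residence (individuals)

def connected_with_russia(p: Counterparty) -> bool:
    """Apply the three limbs of the definition described in the text."""
    if p.is_company:
        return p.incorporated_in == "RU" or p.domiciled_in == "RU"
    return p.located_in == "RU"

# A Russian-incorporated subsidiary of a UK parent is still in scope:
subsidiary = Counterparty(is_company=True, incorporated_in="RU")
```

Note that the first limb is what sweeps in Russian subsidiaries of UK companies, which is why the intra-group scenarios discussed below matter.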

The DIT Guidance

The DIT Guidance provides additional information regarding the intended scope of the services ban with respect to advertising, architectural, auditing, engineering, and IT consultancy and design services, which are largely defined in Schedule 3J of The Russia (Sanctions) (EU Exit) Regulations 2019, as amended (“Russia Regulations”), by reference to certain codes from the 1991, 2002, and 2015 versions of the Central Product Classification (“CPC”), a classification system for goods and services promulgated by the United Nations Statistical Commission.

Advertising Services

The DIT Guidance explains that the direct sale of advertising time and space (except on commission), such as in newspapers, was a valuable product at the time the CPC was drafted and was therefore placed in a separate CPC category from the services provided by advertising agencies. That distinction has been maintained in the UK’s definition of advertising services, both to ensure consistency with the CPC codes and to align the UK with its international partners’ implementation of comparable restrictions.

Architectural and Engineering Services

These are individually prohibited services that are often intertwined in practice (e.g., large engineering firms often have in-house architectural services and vice-versa). The DIT Guidance states that, as a result, there is a strong rationale for banning the provision of both services at the same time. However, the DIT Guidance also states that these services should be treated as two individual restrictions for the purpose of the Russia Regulations.

The exceptions applicable to the two types of services also differ under the Russia Regulations. For example, there is an exception for providing engineering services in relation to the discharge of statutory obligations. A comparable exception does not exist for architectural services. This is because certain statutory obligations exist that require engineering services to be carried out before they can be fulfilled (e.g., Ministry of Transport tests (“MOTs”), building regulations, and environmental legislation).

Auditing Services

The prohibition on auditing services is separate, but complementary, to the pre-existing prohibition on the provision of accounting services. The DIT Guidance states that the UK has distinguished between these types of services because of the differences in their regulatory practices and legal obligations in the UK. Differing licensing grounds also apply to each of these services under the Russia Regulations.

When a part of a multinational group is a person connected with Russia, and another part is a UK company, the provision of auditing services from the UK to Russia generally is prohibited both directly and indirectly. The DIT Guidance explains that the effect of this prohibition on group audits is subject to two exceptions:

  • UK companies that are owned by a Russian parent company (as defined in Section 1162 of the Companies Act 2006). In this scenario, UK companies are still permitted to be audited by a UK auditor in fulfilment of their legal obligations, irrespective of any indirect benefit to the Russian parent undertaking.
  • UK parent companies with a subsidiary company that is a person connected with Russia. In this scenario, when the UK parent company is a credit institution (i.e., it is in the business of taking deposits or other repayable funds from the public and granting credits for its own account), a UK auditor is permitted to provide services to the UK credit institution, for the purposes of the audit of the consolidated group account. When the UK parent company is not a credit institution, auditing services cannot be provided, which may impact the UK audit and need to be reflected in the final audit report.

The DIT Guidance also provides clarity regarding some common scenarios pertaining to the application of the auditing services ban, as follows:

  • when a UK auditor provides standardised material for the purpose of the audit of UK group accounts, and the group includes a Russian subsidiary, those services would generally not be considered to be within scope of the auditing services ban. However, if bespoke material is provided for the audit of the specific Russian subsidiary, this may qualify as the provision of auditing services to a person connected with Russia; and
  • when a UK company that is consolidating group accounts receives an audit report for a Russian subsidiary from a local Russian auditor, this is generally considered the receipt and not the delivery of a service and typically would not be considered within scope of the auditing services ban. However, if the firm wishes to undertake any activities beyond the receipt of the report (e.g., discussing the contents of the report with the Russian subsidiary or local auditor in Russia), it would be necessary to consider whether a person connected with Russia is a direct or indirect beneficiary of such an interaction.

IT Consultancy and Design Services

As a threshold matter, the IT consultancy and design services ban does not prohibit or target:

  • internet access, or the delivery of internet services, to Russian or other citizens (e.g., database services, content delivery networks, or Domain Name Services provision and support); or
  • services that support physical infrastructure of the internet (e.g., maintenance of office machinery, computers, software and related equipment).

The DIT Guidance also clarifies that, given the central importance of IT to the modern economy, the definition of “IT consultancy and design services” is not intended to cover the full breadth of activity relating to information technology. In particular, the DIT Guidance outlines some illustrative examples of activities that are not considered to fall within the scope of the restrictions, including:

  • civilian telecommunication services (i.e., an electronic communication network or electronic communications service that is used for civilian purposes (as defined by Section 32 of the Communications Act 2003));
  • services that are incidental to the exchange of communications over the internet, such as: (1) instant messaging; (2) videoconferencing; (3) chat and email; (4) social networking; (5) sharing of photos, movies, and documents; (6) web browsing; (7) blogging; (8) web hosting; and (9) domain name registration services;
  • service contracts that bundle advice with the design and development of an IT solution;
  • upgrades or updates that are required to ensure or restore civilian telecommunications and/or exchange of communications over the internet;
  • services that provide the storage of data, regardless of how this is delivered (e.g., through the cloud or other means); and
  • Virtual Private Network services.

The DIT Guidance states that upgrades or updates to hardware and software can only be applied under the IT consultancy and design services ban if they are required to ensure, or restore, civilian telecommunications and/or exchange of communications over the internet. Updates/upgrades cannot be applied if they improve performance beyond what is required to ensure civilian telecommunications and/or exchange of communications over the internet.

Wind-down Period

For sanctions related to accounting, public relations, and business and management consulting, a wind-down period for contracts concluded prior to the services ban coming into force expired in August 2022. However, the wind-down period for advertising, architectural, engineering, and IT consultancy and design services permits businesses to provide covered services in satisfaction of an obligation under a contract that was concluded before December 16, 2022, provided that the service is carried out on or before March 15, 2023, and the provision of the service is reported by the end of that day.

With respect to auditing services, the wind-down period permits the provision of covered audit services in satisfaction of an obligation arising from appointment as the auditor of a parent company, provided that:

  • for parent companies that are credit institutions, the auditing services are for one, or both, of the following purposes: (1) the parent company deciding whether accounts of a Russian subsidiary should be included in its consolidated group accounts and/or (2) the inclusion in its consolidated group accounts of the Russian subsidiary’s accounts; or
  • for other parent companies, the appointment as auditor occurred before December 16, 2022, the service is carried out on or before May 31, 2023, and the provision of the service is reported by the end of that day.

Reports should be made to the Secretary of State for International Trade by email at [email protected]. While there is no template reporting form, consideration could be given to identifying and providing details of the sanctioned services provided, the recipient of the services, the period of service provision, and the purpose of service provision.

Exceptions to the Services Ban

Regulations 60DA, 61, and 63 of the Russia Regulations set out a range of exceptions that apply to the various services falling within the scope of the services ban in certain, defined circumstances. An exception in relation to a particular service applies automatically, provided that any conditions set out in the relevant exception are complied with.

Notably, unlike the EU and US, the UK has decided not to permit an exception for UK companies to continue providing covered services to their Russian subsidiaries. To permit the scrutiny and assessment of such activities on a case-by-case basis, the UK has instead decided to manage these activities via the licensing regime for the services ban.

Any entity wishing to continue providing covered services after the end of any applicable wind-down period will require a licence to do so. Licences do not have retrospective effect and activity that would violate the services bans should not be undertaken while waiting for a licence to be granted.


The licensing grounds available under the services ban are set out in the Statutory Guidance to the Russia Regulations and vary as between the different sanctioned services. Each licence application is assessed on a case-by-case basis and any licence that is granted will set out the terms and conditions that will apply to the continued provision of the particular service.

Applications for Standard Individual Export Licences (“SIELs”) under the available licensing grounds should be made to the Department for International Trade’s Export Control Joint Unit (“ECJU”) via SPIRE, the ECJU’s online export licencing system. Some questions on the SPIRE form may not be applicable to the provision of services (as opposed to the export of goods) and may be answered “N/A.” It also is not necessary to submit an end-user undertaking form with an application for a licence under the professional and business services ban. However, a cover letter should be submitted with the licence application that includes information on:

  • the activities the applicant wishes to carry out;
  • how the proposed activities fall within scope of the definition of the relevant prohibited service;
  • supporting evidence explaining why a licence should be granted, including details of the licensing grounds being relied upon;
  • any other relevant documentation; and
  • an explanation of how the activities to be carried out would be consistent with the aims of the UK’s Russia sanctions regime, including how the applicant would ensure compliance with other applicable sanctions measures, if relevant.

The licensing grounds in relation to the services ban include (1) divestment and wind-down of business operations in Russia and (2) the provision of services by a UK parent company to a Russian subsidiary. The DIT Guidance suggests that licence applications relating to these grounds may wish to include additional details regarding the proposed activities, as set out in the “Cover Letter” section of the DIT Guidance.

Importantly, licences that have been issued previously in relation to accounting, public relations, or business and management consulting services do not automatically extend to the provision of other services subject to the services ban. Such licences only authorise the provision of services specified within the particular licence. Consequently, a new licence application will need to be made in relation to the provision of each type of prohibited service that is not specified in the existing licence. The SPIRE reference number for any previous licence application, and any existing licence reference number(s), should be included as part of any such licence application, along with confirmation of whether any information provided in the previous licence applications has changed.

Click Here for the Original Article

British Columbia’s Privacy Regulator Issues New Privacy Breach Guidance: Here’s What You Need to Know

The Office of the Information and Privacy Commissioner for British Columbia (“BC OIPC”) recently released guidance setting out its recommendations for private organizations in British Columbia that experience a privacy breach (the “Guidance Document”). The BC OIPC’s release of recommendations in this regard is a notable development given that British Columbia is currently the only Canadian jurisdiction that does not have statutory breach reporting or notification requirements.

Guidance Document

In the Guidance Document, the BC OIPC regards any unauthorized access to or collection, use, disclosure, or disposal of personal information as a privacy breach. The BC OIPC considers activities to be “unauthorized” if they occur in contravention of British Columbia’s Personal Information Protection Act (“BC PIPA”). Examples of privacy breaches include the inadvertent sharing of personal information with the wrong person and theft of personal information under an organization’s control.

The Guidance Document outlines four key steps that organizations should take when responding to a privacy breach: (1) contain the breach; (2) evaluate the risks; (3) consider notifying affected individuals and other third parties; and (4) take appropriate go-forward preventative measures.

Step 1: Containing the Breach

Effectively containing a privacy breach will depend, in part, on how the breach occurred. For example, where a breach involves unauthorized access to an organization’s computer network, appropriate containment measures could include changing computer access codes, shutting down the breached server, and adding additional digital or physical protective measures.

The BC OIPC also recommends that the organization promptly activate its breach management policy and take care not to destroy evidence that may be useful in identifying the cause of the incident. If an organization does not have a breach management policy in place, the BC OIPC recommends taking the following steps:

  1. appoint an individual to spearhead the initial investigation;
  2. inform the organization’s privacy officer and/or the person responsible for security as well as any other members who should know about the breach;
  3. determine whether a breach response team must be created; and
  4. if the breach involves criminal activity, notify law enforcement.

While not set out in the Guidance Document, it is important to engage external counsel early in the process to obtain advice on applicable legal and regulatory requirements in response to the breach and to otherwise manage associated legal risks.

Step 2: Evaluating the Risks

The Guidance Document recommends that organizations consider the following factors to evaluate the risks arising from a privacy breach:

  1. the personal information involved, including the sensitivity of such information and the potential for misuse;
  2. the cause and extent of the privacy breach, including the nature of the incident, continuing vulnerabilities, and whether compromised data was encrypted or otherwise not readily accessible;
  3. the individuals or others affected by the breach, including the number of such individuals and the organization’s relationship to them; and
  4. the foreseeable harm to affected individuals, the public, and the organization itself that may arise from the breach.

Step 3: Notifying Third Parties

The Guidance Document encourages organizations to consider whether notification to affected individuals and other third parties may be appropriate in the circumstances. Notably, BC PIPA does not currently require private organizations to notify affected individuals or the BC OIPC when a privacy breach has occurred. As such, the decision to do so is voluntary. As noted above, it is important to seek legal advice to develop a breach response strategy that is appropriate in the circumstances.

The Guidance Document recommends consideration of the following factors to determine whether notification to affected parties is appropriate:

  1. whether legislation or contractual obligations require notification;
  2. whether there is a risk of identity theft, fraud, physical harm, or damage to reputation; and
  3. whether there is a risk of loss of business or employment opportunities or loss of confidence in the organization.

If an organization determines that notification is appropriate, the Guidance Document recommends doing so as soon as possible unless notification would impede an ongoing criminal investigation. The Guidance Document recommends that such notification be made directly to affected individuals and include: (i) the name of the organization; (ii) the date the organization was made aware of the breach; (iii) a description of the breach and potential harms; (iv) a description of the personal information involved; (v) the steps taken to control or reduce potential harms; (vi) steps the individual can take to further mitigate the risk of harm; (vii) contact information for an individual within the organization who can answer questions or provide further information; and (viii) a statement that the organization has notified the BC OIPC (if applicable).

The Guidance Document also encourages organizations to consider whether to inform other third parties or authorities of the privacy breach (e.g., law enforcement, insurers, regulatory bodies, other affected parties, and the BC OIPC).

Step 4: Preventing Future Breaches

Once the risks associated with the breach have been mitigated, the Guidance Document recommends that organizations investigate the cause of the breach and determine what is needed to prevent a similar incident from occurring again. In this regard, the BC OIPC recommends that organizations review and update current policies and continue to do so regularly.

Key Takeaway

British Columbia currently stands on its own as the only Canadian jurisdiction that does not have statutory breach reporting or notification requirements. Yet, the BC OIPC’s recommendations in the Guidance Document acknowledge that notification to individuals can be a useful tool to mitigate harm to an individual whose personal information has been inappropriately collected, used or disclosed, and that notification to the BC OIPC may be appropriate in certain circumstances.

Click Here for the Original Article


‎The Importance of Incorporating Fair Chance Hiring into Your Business

There are many facets to building a culture of diversity, equity, and inclusion (DEI) in the workplace that some businesses tend to overlook when enacting these types of programs. Many DEI resources discuss the purposeful inclusion of those with different ethnic or religious backgrounds, genders, and abilities. With that, however, it’s also critical to consider and incorporate the facet of Fair Chance Hiring.

What is Fair Chance Hiring?

Fair Chance Hiring originated from the Fair Chance Act and similar laws at the state and federal levels, which provide that when a candidate applies for a job, the employer should not ask about any arrest or conviction history before making a conditional offer of employment. The purpose is to eliminate the sort of “second tier” group of citizens America has created for people who have arrest or conviction records.

As a result, there has been a movement over the last several years to urge employers to consider a candidate’s qualifications and skill sets rather than looking at a mistake they may have made in the past that resulted in a criminal record.

The benefits of Fair Chance Hiring in light of a labor shortage

Fair Chance Hiring is a hot topic across the country, seeing as there is a tremendous untapped talent pool and a significant labor shortage in America. To put it in perspective, there are approximately 70 million people who have some form of a criminal record, which equates to about one third of the country’s workforce. Within that large group exists incredible talent — in fact, there are people in this country who are executives at major Fortune 500 companies who have suffered felony convictions, which means there is also great entry level talent within this group.

When you provide the opportunity for these highly marginalized groups to thrive in a middle-class economy and pave a way in livable wage jobs, it provides them with extra incentive to desire to perform better, remain loyal, and stay with the company longer.

Additionally, it might be plausible to assume that employees who came from Stanford, Cal Berkeley, or other premier schools would tend to promote faster, but on the contrary, those who have been involved with the criminal justice system actually promote faster when given the opportunity. Research based on 1.3 million United States military enlistees shows that those with criminal records were promoted more quickly and to higher ranks than other enlistees and had the same attrition rates due to poor performance as their peers without records.

If you practice DEI and it’s important to you and your company’s mission, it’s important to have justice-impacted people as the baseline for your inclusion strategy.

What is Ban the Box?

Ban the Box is a movement by civil rights groups urging employers and legislators to remove questions related to criminal history from job applications. The “box” is what previously convicted job seekers face when asked to disclose that information on their applications. According to the NAACP, more than 25 states and 150 local areas have adopted Ban the Box laws and policies.

One study from Johns Hopkins Hospital found that, after it removed criminal history questions from initial applications and based hiring decisions on merit, hired applicants with criminal records had a lower turnover rate than those with no records.

The DEI sector in corporate America focuses largely on advocating for the representation of marginalized groups including people of color, those who identify as LGBTQ+, people with disabilities, and women. However, we need to consider why formerly convicted applicants are not being advocated for at the same level of priority. There are many common stereotypes of those with former convictions that DEI initiatives can help to break.

Click Here for the Original Article

Changing Rules Governing Artificial Intelligence in Employment

In recent years, employers have begun to use artificial intelligence, machine learning, algorithms, and other automated systems or technologies (“AI Technologies”) to recruit and hire employees and make other employment decisions. At the same time, federal regulatory agencies and a number of localities have enacted new rules targeting how employers may use AI Technologies. As further discussed below, employers should expect increased enforcement efforts of federal, state, and local agencies (as well as private litigation) regarding their use of AI Technologies.

Most recently, on January 10, 2023, the Equal Employment Opportunity Commission (“EEOC”) issued a Draft Strategic Enforcement Plan (“SEP”) that places employment discrimination in the use of AI Technologies at the top of its strategic priorities list. Specifically, the SEP indicates the EEOC will make a concerted effort to eliminate employment discrimination in “the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”

The SEP indicates that the EEOC plans to take a “targeted approach” in enforcing employment discrimination laws through directed investigations and litigation to “positively influenc[e] employer practices and promot[e] legal compliance.” The EEOC specifically called attention to the “lack of diversity” in the construction and “high tech” industries, “growth industries,” and industries benefitting from substantial federal investments as “areas of particular concern.”

The EEOC has already increased investigation and litigation activity related to the use of AI Technologies. For example, in May 2022, the EEOC commenced its first lawsuit related to allegedly discriminatory use of AI Technologies by an employer. EEOC v. iTutorGroup, Inc., et al., Case No. 1:22-cv-02565 (E.D.N.Y.). The age discrimination lawsuit was filed in the United States District Court for the Eastern District of New York against three integrated companies providing English-language tutoring services for allegedly programming their tutor application software to automatically reject older applicants. In the lawsuit, the EEOC is seeking back pay and liquidated damages for more than 200 applicants who were allegedly denied jobs under the defendants’ application program.

The final version of the SEP (which will be issued after a public meeting on January 31, 2023, and a public comment deadline of February 9, 2023) will provide further guidance regarding the EEOC’s approach to enforcing employment discrimination laws in the context of AI Technologies going forward.

While the EEOC’s draft SEP has affirmed its commitment to enforcing all employment discrimination laws implicated by the use of AI Technologies, disability discrimination laws remain a point of emphasis and pose unique obstacles to employers’ use of AI Technologies. In May 2022, the EEOC released guidance specifically addressing how the Americans with Disabilities Act (“ADA”) applies to the use of AI Technologies in recruiting applicants and making employment decisions. In that guidance, the EEOC specifically identified potential ADA violations where: (1) an employer using AI Technologies does not provide legally required reasonable accommodations to applicants and employees with disabilities to ensure fair assessment, (2) the AI Technologies used by the employer—intentionally or unintentionally—screen out individuals with disabilities who could perform the essential functions of a job with a reasonable accommodation, and (3) the AI Technologies used by the employer violate ADA restrictions on disability-related inquiries and medical examinations. The EEOC contends that employers are generally responsible for ADA violations caused by the use of AI Technologies even when such technologies are developed and administered by third-party vendors.

To avoid potential ADA violations arising out of the use of AI Technologies, employers should consider taking steps such as: (1) providing advance notices advising applicants and employees of the use of AI Technologies (including information about what traits or characteristics are being measured by the technology and the methods used to do so); (2) obtaining consents from applicants and employees regarding such use of AI Technologies; (3) advising applicants and employees how to contact the employer if reasonable accommodations are needed; and (4) providing adequate reasonable accommodations to applicants and employees with disabilities. Due to the plethora of physical and/or mental disabilities that exist, any required reasonable accommodations must be specifically tailored to each individual and there is no one-size-fits-all approach to the use of AI Technologies that ensures compliance with disability discrimination laws.

In addition to federal laws and guidance regarding the use of AI Technologies in employment decisions, employers must be cognizant of developments in state and local laws on this topic. Most prominently, New York City passed a local law effective on January 1, 2023, that prohibits employers from using “automated employment decision tools” to screen candidates or employees for employment decisions unless the tool has been the subject of a “bias audit” not more than one year prior to the use of the tool, among other things. The law also requires employers using automated employment decision tools to provide certain notices to candidates and employees who reside in the city regarding the use of such tools. There are a number of open questions regarding the meaning and application of this law, which the New York City Department of Consumer and Worker Protection (“DCWP”) is currently attempting to clarify through a revised set of proposed rules.  The DCWP held a second public hearing regarding its proposed rules on January 23, 2023, and is delaying enforcement of the law until April 15, 2023.

Other states and local governments have (or may in the future implement) their own laws and regulations regarding the use of AI Technologies in employment matters, leading to complex and varied requirements for employers to ensure their use of AI Technologies is legally compliant in all jurisdictions in which they operate.

In 2023 employers should work with legal counsel to: (1) assess whether their current or prospective use of AI Technologies in employment matters complies with current legal requirements and guidance, (2) create or update legally compliant policies and procedures related to the use of AI Technologies in making employment decisions, (3) respond to and defend enforcement actions and private litigation related to the use of AI Technologies in employment matters, and (4) closely monitor further legal developments that are likely to come—federally and in state and local jurisdictions—related to the use of AI Technologies in employment matters.

Click Here for the Original Article

“AI Made Me Do It” No Defense When Using Automation in Employment

After the Equal Employment Opportunity Commission (“EEOC”) recently indicated it intends to increase scrutiny over employers’ use of artificial intelligence (“AI”) and machine learning in recruitment, hiring, and disciplinary decisions, employers are well advised to do the same. Automation in employment decisions usually goes one of two ways — it mitigates or increases bias. With 99% of Fortune 500 firms and 25% of small companies using some form of AI in their employment processes, employers of all sizes now face legal exposure that did not previously exist.

In its most recent public hearing, the EEOC hosted expert witness testimony on how AI may affect employer liability under the Americans With Disabilities Act (“ADA”), Title VII of the Civil Rights Act of 1964, and the Age Discrimination in Employment Act, among other civil rights laws. The hearing comes on the heels of the EEOC’s announcement that its enforcement priorities for 2023-2027 include AI in employment. This priority is unsurprising, considering the EEOC recently sued three companies for using online recruitment software that allegedly automatically rejected otherwise qualified candidates because of their age and gender. In that same month, the EEOC issued guidance on how AI can exclude disabled workers.

The potential implications of “AI in employment” are vast, but below are a few practices that, according to recent guidance, are likely to place employers in the EEOC’s crosshairs.

  • Implicit bias in – disparate impact out: “Facially neutral” criteria can operate to exclude certain protected classes. When neutral policies/criteria disparately impacts employees of a protected class, the risk for a legally viable claim is high. Here are a few EEOC-provided examples of how this plays out in the AI sphere.

Many employers regard gaps in employment as a “red flag” and could ask AI to de-prioritize applicants with gaps. The result would likely exclude women (due to parental leave) and individuals with disabilities.

An employer could ask AI to prioritize workers in the ZIP codes near the work site. However, because of redlining, the employer may unintentionally exclude applicants whose families were historically forced to reside in other ZIP codes due to their race.

Personality tests have grown in popularity, but if an employer asks AI to exclude applicants who do not “exhibit optimism,” the test could screen out an otherwise qualified applicant with Major Depressive Disorder, which would violate the ADA.

  • The AI as “decisionmaker” may be no defense in light of user preference adaptation: Human intervention is not a cure-all, and machine learning could institutionalize existing practices. Here are a few examples.

If an employer identifies a group of “good” employees and seeks to hire individuals who display the same traits, automated machine learning and user preference adaptation may result in outcome replication. Employers will end up with workforces identical to their current ones, which could not only stifle the innovation brought by new perspectives and perpetuate the underrepresentation of traditionally underrepresented groups, but also give rise to legal liability.

If an HR representative reviews applications and gives the “thumbs up” or “thumbs down” rating, the machine will learn and adapt to meet that HR representative’s preferences. However, the EEOC’s focus on the role unconscious bias plays in most of these assessments, backed by the well-established fact that humans tend to prefer people who are “like” them, means that AI’s “learned” preferences open the door to legal liability.
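The article does not say how disparate impact is measured in practice. One common yardstick, not mentioned above but drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures, is the “four-fifths rule”: if one group’s selection rate falls below 80% of the most-favored group’s rate, the outcome is commonly treated as evidence of potential disparate impact. The sketch below is purely illustrative, with hypothetical numbers, of how an employer might screen an automated tool’s outcomes against that threshold.

```python
# Illustrative sketch only: the article does not prescribe a metric, but the
# "four-fifths rule" from the EEOC's Uniform Guidelines is a common yardstick
# for disparate impact. All applicant and selection counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

# Hypothetical outcomes from an automated screening tool
reference = selection_rate(50, 100)   # most-favored group: 50% selected
comparison = selection_rate(30, 100)  # comparison group: 30% selected

ratio = adverse_impact_ratio(comparison, reference)
print(f"Impact ratio: {ratio:.2f}")

# A ratio below 0.8 is commonly treated as evidence of potential
# disparate impact that warrants closer review.
print("Potential disparate impact" if ratio < 0.8 else "Within guideline")
```

A check like this is only a screening heuristic; as the article notes, legal exposure turns on the specific facts, so results that fall near or below the threshold are a prompt to involve counsel, not a verdict in themselves.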

While the use of AI itself will not get an employer in trouble, the EEOC has made clear that it is incumbent upon employers to educate themselves on the risks and benefits of such use and the potential processes and outcomes tied to it. “AI made me do it” will not be an effective defense. Employers should be prepared for increased scrutiny if they use AI, as the EEOC has indicated it is considering workplace audits for such companies, like those used in pay equity, as part of its “crackdown” on the unintended consequences of AI use.

To say that AI and automated machine learning in employment is a nuanced topic is, quite frankly, an understatement. Technological advancement is ever evolving, and the EEOC guidance likely will lag far behind innovation. However, there can be no doubt that employers who use AI may attract the attention of the EEOC. Employers need to ensure they stay proactive to detect insidious legal risks, lest they find themselves the “test case” to develop new law. One of the best ways to stay ahead of the curve is to speak with experienced Labor & Employment counsel.

Click Here for the Original Article

TransUnion In “Active Settlement Discussions” with CFPB and FTC Over Tenant Screening

On February 14, TransUnion filed its annual 10-K report pursuant to the Securities and Exchange Act. Under the section entitled “Risks Related to Laws, Regulations and Government Oversight,” the company disclosed that it was in “active settlement discussions” with the Consumer Financial Protection Bureau (CFPB) and Federal Trade Commission (FTC) over alleged Fair Credit Reporting Act compliance lapses related to tenant screening.

In the filing, the company stated that in March 2022 it received a Notice and Opportunity to Respond and Advise (NORA) letter from the CFPB alleging that TransUnion and its tenant and employment screening business, TransUnion Rental Screening Solutions, Inc., failed to: “(i) follow reasonable procedures to ensure maximum possible accuracy of information in consumer reports and (ii) disclose to consumers the sources of such information.” On July 27, 2022, the CFPB advised the company that it had obtained authority to pursue a joint enforcement action with the FTC.

While TransUnion is actively trying to settle this matter, it disclosed that if negotiations were not successful, the company expected the CFPB and FTC to pursue litigation and seek “redress, civil monetary penalties and injunctive relief.”

To conclude that section of its filing, TransUnion also noted that recently the credit reporting industry had been subjected to heightened scrutiny. “Based in part on public comments from CFPB officials, we believe this trend is likely to continue and could result in more legislative and regulatory scrutiny of the practices of our industry and additional regulatory enforcement actions and litigation … .”

As we reported here, on November 15, 2022, the CFPB issued two reports highlighting what it perceived to be errors that frequently occur in tenant background checks and the impact the CFPB believed those errors could have on potential renters. In the press release accompanying those reports, the CFPB stated, “[t]he reports describe how errors in these background checks contribute to higher costs and barriers to quality rental housing. Too often, these background checks — which purport to contain valuable tenant background information — are filled with largely unvalidated information of uncertain accuracy or predictive value. While renters bear the costs of errors and false information in these reports, they have few avenues to make tenant screening companies fix their sloppy procedures.” As a result, the CFPB pledged, among other priorities, to continue to work closely with the FTC to take action on those issues. The NORA letter to TransUnion appears to be a continuation of that pledge and recent trend of cooperation between those two agencies.

Click Here for the Original Article

U.S. Supreme Court to Hear Fight over Consumer Watchdog Agency’s Funding

The U.S. Supreme Court on Monday agreed to decide whether the Consumer Financial Protection Bureau’s funding structure established by Congress violates the U.S. Constitution in a case that President Joe Biden’s administration has said threatens the agency’s ability to function and risks market disruption.

The justices took up the CFPB’s appeal of a lower court’s ruling in a lawsuit brought by trade groups representing the payday loan industry that the agency’s funding mechanism violated a constitutional provision giving lawmakers the power of the purse. The agency, which enforces consumer financial laws, draws money each year from U.S. Federal Reserve earnings rather than budgets passed by Congress.

The case is the latest to come before the Supreme Court seeking to rein in the authority of federal agencies. The court’s 6-3 conservative majority has signaled skepticism toward expansive regulatory power in rulings in recent years including one in 2022 that limited the Environmental Protection Agency’s authority to issue sweeping regulations to reduce carbon emissions from power plants.

A CFPB spokesperson, welcoming the court’s decision to hear the appeal, said there is nothing “novel or unusual” about the agency’s funding structure.

“As it did for the Federal Reserve Board and other federal banking regulators, Congress authorized the CFPB’s funding through legislation other than annual spending bills,” the spokesperson said. “This type of funding is a vital part of the nation’s financial regulatory system, providing stability and continuity for the agencies and the system as a whole.”

The justices will hear the case during the court’s next term, which begins in October. Biden’s administration had asked that the case be heard in the court’s current term.

The CFPB was created by a Democratic-led Congress in 2010, following the 2008 financial crisis, as part of a federal law called the Dodd–Frank Wall Street Reform and Consumer Protection Act.

“Despite years of desperate attacks from Republicans and corporate lobbyists, the constitutionality of the CFPB and its funding structure have been upheld time and time again,” said Democratic U.S. Senator Elizabeth Warren, who championed the agency’s formation and is a staunch defender of its mission. “If the Supreme Court follows more than a century of law and historical precedent, it will strike down the 5th Circuit’s decision before it throws our financial markets and economy into chaos.”

A lawyer for the trade groups did not immediately respond to a request for comment.

The Community Financial Services Association of America and the Consumer Service Alliance of Texas sued in 2018. They argued that the CFPB’s “perpetual budget” was improperly exempted from congressional supervision, violating the constitutional principle of separation of powers among the U.S. government’s executive, legislative and judicial branches.

The lawsuit also took aim at a CFPB regulation designed to curb “unfair” and “abusive” payday lending practices. The 2017 rule barred lenders from trying to withdraw loan repayments from a borrower’s bank account after two consecutive attempts failed due to insufficient funds unless the consumer consented.

A federal judge in 2021 sided with the agency. But the New Orleans-based 5th U.S. Circuit Court of Appeals last October ruled that the funding structure violated the Constitution’s “appropriations clause,” which vests spending authority in Congress. The decision, by a panel of three judges appointed by Republican then-President Donald Trump, also vacated the 2017 regulation.

Biden’s administration told the Supreme Court that the CFPB’s funding structure devised by Congress – providing that a fixed amount go to the agency each year – was effectively “a standing, capped lump-sum appropriation.” The Fed last fiscal year transferred around $642 million to the agency.

The Supreme Court ruled 5-4 in another case in 2020 that legal restrictions on a president’s ability to fire the CFPB director without cause were an unconstitutional infringement upon presidential authority, though the justices stopped short of invalidating the agency.

The court heard arguments in November in two further cases involving agency power. The conservative justices in those cases appeared inclined to make it easier to challenge the regulatory power of agencies in disputes involving the Federal Trade Commission and the Securities and Exchange Commission.

Click Here for the Original Article


Let's start a conversation

    Nicolas Dufour - EVP and General Counsel, Corporate Secretary

    Nicolas Dufour serves as EVP, General Counsel, corporate secretary, privacy officer, and a member of the executive management team for ClearStar. He is proficient in the FCRA, GLBA, Privacy Shield, and GDPR compliance, as well as other data privacy regimes and publicly traded companies' governance. He is responsible for managing all legal functions to support the evolving needs of a fast-paced and rapidly changing industry. His position includes providing legal guidance and legal management best practices and operating standards related to the background screening industry; federal, state, and local laws and regulations; legal strategic matters; product development; and managing outside counsel. He represents the company in a broad range of corporate and commercial matters, including commercial transactions, M&A, licensing, regulatory compliance, litigation management, and corporate and board governance. He researches and evaluates all aspects of legal risk associated with growth into different markets. He assists the management team in setting goals and objectives in the development, implementation, and marketing of new products and services. He advises and supports management, the Board of Directors, and operating personnel on corporate governance, company policies, and regulatory compliance.

    At ClearStar, we are committed to your success. An important part of your employment screening program involves compliance with various laws and regulations, which is why we are providing information regarding screening requirements in certain countries, regions, etc. While we are happy to provide you with this information, it is your responsibility to comply with applicable laws and to understand how such information pertains to your employment screening program. The foregoing information is not offered as legal advice but is instead offered for informational purposes. ClearStar is not a law firm, does not offer legal advice, and this communication does not form an attorney-client relationship. The foregoing information is therefore not intended as a substitute for the legal advice of a lawyer knowledgeable of the user’s individual circumstances. ClearStar makes no assurances regarding the accuracy, completeness, or utility of the information contained in this publication. Legislative, regulatory, and case law developments regularly impact general research, and this area is evolving rapidly. ClearStar expressly disclaims any warranties and any responsibility for damages associated with or arising out of the information provided herein.
