MAY 2022 SCREENING COMPLIANCE UPDATE
US Department of Justice Provides Guidance to Employers on Opioid Addiction and the ADA
In the context of the opioid crisis, the US DOJ has issued a Q&A guidance on how the Americans with Disabilities Act may apply to those in treatment for or recovery from opioid use disorder (OUD). The DOJ makes several points of significance to employers.
In the guidance, the DOJ asserts that drug addiction is a disability under the ADA, as long as the individual is not currently using illegal drugs. The ADA regulations define “current illegal use of drugs” as the “illegal use of drugs that occurred recently enough to justify a reasonable belief that a person’s drug use is current or that continuing use is a real and ongoing problem.”
What this actually means has been unclear, with some courts in the past taking the position that the employee must have completed treatment and been “clean” for some significant period of time.
However, the DOJ, as well as the Equal Employment Opportunity Commission, is taking a more aggressive approach to the definition. Thus, the DOJ states that, although those engaged in the current illegal use of drugs are not protected by the ADA, those in treatment for or recovery from OUD are. More specifically:
- The ADA protects individuals who are taking legally prescribed medications for opioid use disorder (“MOUD,” i.e., methadone, buprenorphine, or naltrexone) under the supervision of a licensed health care professional to treat OUD.
- The ADA also protects individuals currently participating in a drug treatment program.
- Those with a history of past OUD are also protected by the ADA, since the ADA protects individuals with a “record of” disability.
- In addition, those who are “regarded as” having OUD are also protected by the ADA. The DOJ offers the example of an employer believing an employee has OUD because the employee uses opioids legally prescribed by her physician for pain. If the employer fired that employee, it would be a violation of the ADA.
- Employers may have a drug policy and conduct drug testing for opioids. However, an employee who tests positive because they are taking legally prescribed opioids may not be fired or denied employment based on their drug use, unless they cannot do the job safely and effectively or they are disqualified under another federal law (such as Department of Transportation regulations).
Notably, on this last point, although the DOJ guidance does not address it, the EEOC guidance on this topic for employees (there is no guidance for employers) makes clear that employees taking legally prescribed opioids may be entitled to a reasonable accommodation, if the medical condition causing the pain requiring the use of such medication constitutes a disability. Such accommodation could include allowing the use of opioid medications, although as noted above, such use cannot prevent the safe and effective performance of the job or violate some other law. But even in that case, employers may need to consider transferring the employee to an open position that would permit such use, if no other reasonable accommodation is available. Employees may also be entitled to a reasonable accommodation to avoid relapse, such as scheduling changes to allow the employee to attend a support group meeting or therapy session.
Thus, if an employee is taking MOUD, is in other treatment, or is taking legally prescribed opioids, it is important that the employer engage in the interactive process to ascertain whether a reasonable accommodation is available. Additionally, as the EEOC notes, employees may also be entitled to take leave under the Family and Medical Leave Act for treatment or recovery.
Justice Department and EEOC Warn Against Disability Discrimination
Employers’ Use of Artificial Intelligence Tools Can Violate the Americans with Disabilities Act
The Department of Justice and the Equal Employment Opportunity Commission (EEOC) today each released a technical assistance document about disability discrimination when employers use artificial intelligence (AI) and other software tools to make employment decisions.
Employers increasingly use AI and other software tools to help them select new employees, monitor performance, and determine pay or promotions. Employers may give computer-based tests to applicants or use computer software to score applicants’ resumes. Many of these tools use algorithms or AI. These tools may result in unlawful discrimination against people with disabilities in violation of the Americans with Disabilities Act (ADA).
The Justice Department’s guidance document, Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, provides a broad overview of rights and responsibilities in plain language, making it easily accessible to people without a legal or technical background. This document:
- Provides examples of the types of technological tools that employers are using;
- Clarifies that, when designing or choosing technological tools, employers must consider how their tools could impact different disabilities;
- Explains employers’ obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation; and
- Provides information for employees on what to do if they believe they have experienced discrimination.
The EEOC released a technical assistance document, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, focused on preventing discrimination against job seekers and employees with disabilities. Based on the ADA, regulations, and existing policy guidance, this document outlines issues that employers should consider to ensure that the use of software tools in employment does not disadvantage workers or applicants with disabilities in ways that violate the ADA. The document highlights promising practices to reduce the likelihood of disability discrimination. The EEOC technical assistance focuses on three primary concerns under the ADA:
- Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;
- Without proper safeguards, workers with disabilities may be “screened out” from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and
- If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.
“Algorithmic tools should not stand as a barrier for people with disabilities seeking access to jobs,” said Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division. “This guidance will help the public understand how an employer’s use of such tools may violate the Americans with Disabilities Act, so that people with disabilities know their rights and employers can take action to avoid discrimination.”
“New technologies should not become new ways to discriminate. If employers are aware of the ways AI and other technologies can discriminate against persons with disabilities, they can take steps to prevent it,” said EEOC Chair Charlotte A. Burrows. “As a nation, we can come together to create workplaces where all employees are treated fairly. This new technical assistance document will help ensure that persons with disabilities are included in the employment opportunities of the future.”
The EEOC’s technical assistance document is part of its Artificial Intelligence and Algorithmic Fairness Initiative to ensure that the use of software, including artificial intelligence (AI), used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces. In addition to its technical assistance, the EEOC released a summary document providing tips for job applicants and employees.
CFPB issues new circular on application of ECOA adverse action notice requirements to credit decisions using algorithms
Under Director Chopra’s leadership, the CFPB has regularly sounded alarms about the potential for discrimination arising from the use of so-called “black box” credit models that rely on algorithms or other artificial intelligence (AI) tools. Today, in the second of its recently launched Consumer Financial Protection Circulars, the CFPB addresses ECOA adverse action notice requirements in connection with credit decisions based on algorithms.
The new Circular (2022-03) is intended to deliver the message that:
- ECOA/Regulation B adverse action notice requirements apply to all credit decisions, regardless of the technology used to make them.
- The fact that a creditor’s technology for evaluating applications is too complicated or opaque for the creditor itself to understand is not a defense to noncompliance with adverse action notice requirements.
- It is a violation of the ECOA and Regulation B for a creditor to use algorithms or other technology if doing so makes a creditor unable to satisfy the requirement to provide a statement of reasons for adverse action that is “specific” and “indicate[s] the principal reason(s) for the adverse action.”
While the issuance of the new Circular allows the CFPB to issue a press release announcing that it is “act[ing] to protect the public,” the CFPB does not share any new information about the ECOA/Regulation B requirements in the Circular. Most notably absent from the Circular is any guidance that would assist creditors in meeting their compliance obligations. Indeed, in a July 2020 blog post, the CFPB acknowledged the challenges that the use of AI and algorithms creates for providing compliant ECOA adverse action notices.
In its press release about the new Circular, the CFPB renews its December 2021 call for tech workers to act as whistleblowers to report potential discrimination arising from the use of algorithms and other technologies. The CFPB has previously acknowledged the potential consumer benefits of AI and other technologies. We believe the CFPB could benefit both consumers and providers by providing guidance on how industry can best navigate the sometimes challenging compliance issues raised by new technologies.
In the absence of useful guidance from the Bureau, it is imperative that companies seek legal guidance so that they will be in a position to defend their approach to adverse action notices if challenged.
CFPB Releases Statement on Credit Reporting Companies’ & Furnishers’ Obligations Under FCRA
The CFPB recently issued a statement regarding credit reporting companies (i.e., credit reporting agencies or “CRAs”) and furnishers and their legal obligations under FCRA. Specifically, the CFPB stated that CRAs and furnishers are required to ensure that the information reported on consumers is both “legally” and “factually” accurate.
In two active cases, one against a CRA and one against a furnisher, the CFPB filed amicus briefs in support of the Bureau’s position that CRAs and furnishers must ensure reported consumer information does not contain “legal” or “factual” inaccuracies. In the case against the CRA, a consumer brought an action against the CRA for violating FCRA by allegedly failing to have reasonable procedures to assure the information on the consumer’s report was accurate. The consumer claims that the CRA reported on her credit report that she owed a certain amount on a car lease that she actually did not owe under the terms of her lease. The CRA argued that it did not err because the error was “legal” rather than “factual” in nature. The CRA also argued that the information was not inaccurate because it was provided by the company that financed the lease. In the amicus brief, which was filed jointly by the CFPB and FTC, the agencies stated that regardless of whether the error is “legal” or “factual,” the CRA needed to follow reasonable procedures to prevent inaccurate statements and noted that FCRA does not exempt “legal” inaccuracies.
In the other case, a consumer, who was a victim of identity fraud and had many accounts opened in her name, brought an action against a credit furnisher for reporting inaccurate information to CRAs even after the consumer disputed the information. The consumer alleges that after she disputed the information with the furnisher, the furnisher kept reporting the account in question to CRAs. The furnisher argued that the error was “legal” rather than “factual,” which should excuse the furnisher from liability. In its amicus brief, the CFPB noted that the furnisher had a duty under FCRA to reasonably investigate the disputed information, regardless of whether the dispute could be characterized as a “legal” or “factual” issue.
Twitter to Pay $150M for Violating 2011 FTC Order Regarding Misrepresentation of its Privacy and Security Practices
On May 25, 2022, the Federal Trade Commission (FTC) announced that it, along with the Department of Justice, fined Twitter $150 million for violating a 2011 agreement the company had with the Commission. Under the 2011 FTC order, Twitter agreed that it would protect the integrity of nonpublic consumer information, including users’ phone numbers and email addresses. According to federal investigators, Twitter broke this promise.
The FTC found that Twitter requested users’ email addresses and phone numbers under the guise of protecting their accounts as part of the “two-factor authentication” method used to provide users with an additional layer of security. But rather than limit the use of users’ data for this purpose, the FTC found that Twitter used the information it received from its users to increase the company’s own profits by allowing advertisers to use that data to target advertisements towards specific users. In the FTC’s announcement, FTC Chair Lina Khan stated, “This practice affected more than 140 million Twitter users, while boosting Twitter’s primary source of revenue.”
The proposed order includes several corrective provisions in addition to the $150 million fine. If adopted, the order will prohibit Twitter from profiting from the data obtained in violation of the 2011 order. The new order would also require Twitter to:
- Provide customers with alternative multi-factor authentication methods;
- Notify users that it misused nonpublic consumer information collected for account security to target ads to them;
- Implement and maintain a broad privacy and information security program;
- Limit employee access to users’ personal data; and
- Notify the FTC of any future data breaches.
To be sure, the FTC’s charge against Twitter is not surprising. The FTC previewed this issue back in October 2021 when it released findings from an FTC staff report on Internet Service Providers’ collection and use practices. The report found that even though ISPs “promise not to sell consumers personal data, they allow it to be used, transferred, and monetized by others.” The report concluded that the ISPs’ use and collection practices mirrored problems identified in other industries and emphasized the importance of regulating data collection and use. Websites using added protection to entice users to share more information should take heed of the order against Twitter. Additionally, as many tech companies have entered into consent orders with the FTC dealing with consumer protection issues, they must take appropriate measures to ensure that they are not violating those orders.
NYC Council Amends Law Requiring Disclosure of Salary Ranges in Job Postings, Delays Effective Date to November 1, 2022
On April 28, 2022, the New York City Council amended the city’s salary transparency law to delay its effective date from May 15, 2022 to November 1, 2022. As discussed in our previous advisories, the law amends the New York City Human Rights Law (NYCHRL) to require that employers disclose a compensation range for positions in all job postings, including postings for promotion or transfer opportunities for current employees.
Although the original law used the term “salary,” the amendment, as well as guidance issued in March by the New York City Commission on Human Rights (NYCCHR), makes clear that the posting requirement also applies to positions paid on an hourly basis. The range listed must be what the employer “in good faith believes” it would pay for the position at the time of the posting.
What Else Has Changed?
Application to Remote Positions
The law will still apply to remote work performed in New York City, as the amendment merely states that the law will not apply to “[p]ositions that cannot or will not be performed, at least in part, in the city of New York.” The previously issued NYCCHR guidance states that the law will apply to “positions that can or will be performed, in whole or in part, in New York City, whether from an office, in the field, or remotely from the employee’s home.”
This leaves open the possibility that the NYCCHR, the agency that will enforce the law, will determine that positions that are, or might be, filled by an employee working remotely from New York City are covered by the law, even if the employer does not maintain an office in the city and does not require the employee to work there.
Who Can Make a Claim?
Under the amended law, only current employees may bring an action against their employer for advertising a job, promotion, or transfer opportunity without posting a minimum and maximum hourly wage or annual salary. Individuals can file a complaint with the NYCCHR.
No Penalty for First Violation
Under the amended law, there will be no penalty for first violations if the employer corrects the violation within 30 days. However, an employer’s submission of proof that the violation was corrected “shall be deemed an admission of liability for all purposes.”
For additional violations, companies that fail to comply may have to pay monetary damages to individuals, as well as “civil penalties of up to $250,000.”
Items Not Included in the Amendment
The amendment does not include certain provisions that were proposed to the NYC Council on April 5, 2022. For example, the law will apply to all employers covered by the NYCHRL, with no exception for small employers. All employers that have four or more employees (or one or more domestic workers) are covered by the NYCHRL, but not all four employees have to be located in New York City.
According to guidance issued by the NYCCHR, this means that the law will apply to any company with at least one employee in New York City, as the guidance states: “owners and individual employers count towards the four employees. The four employees do not need to work in the same location, and they do not need to all work in New York City. As long as one of the employees works in New York City, the workplace is covered.”
The amendment approved by the NYC Council does not allow employers to avoid the salary range provision by posting a general notice that it is hiring, without reference to any particular position.
Employers now have more time to determine which job, transfer, and promotion postings must include a minimum and maximum level of compensation. Employers also have more time to decide whether to change their company’s current compensation structure before the information must be made public in job postings.
Washington Becomes Third Jurisdiction to Require Wage Disclosures in Job Postings
In an effort to close what is viewed as a persistent pay gap, Washington has amended its Equal Pay and Opportunities Act (EPOA) for the second time to require employers to include wage and benefit information in their job postings. This replaces the prior requirement that employers provide this information to applicants “upon request” after receiving a job offer. Washington is one of the first jurisdictions to require this information to be provided in job postings, following Colorado and New York City. Other jurisdictions have passed wage disclosure requirements and are likely to follow suit with respect to job postings.
Which employers must comply with the wage disclosure requirements?
The wage disclosure requirements apply to employers with 15 or more employees. The law does not specify whether only employees in Washington count toward this threshold.
What are the wage disclosure requirements for applicants?
Employers must disclose in each posting for each job opening:
- The wage scale or salary range, and
- A general description of all benefits and other compensation to be offered.
A “posting” means any solicitation intended to recruit job applicants for a specific available position that includes qualifications for desired applicants. This includes direct recruitment by the employer or indirect recruitment through a third party. It includes any electronic or hard-copy postings.
What are the wage disclosure requirements for current employees?
For employees offered an internal transfer to a new position or a promotion, the employer must provide the wage scale or salary range for the new position upon request.
How have the requirements changed?
Washington initially amended the EPOA in 2019 to mandate wage disclosures, but these amendments required employers to provide certain information only upon request. For applicants, employers had to provide the minimum wage or salary for the position upon request after an initial job offer, instead of the new requirement to post wage and benefit information for all potential applicants. For employees offered a transfer or promotion, employers had to provide upon request the wage scale or salary range or the minimum wage or salary (if a wage scale or salary range did not exist); now employers do not have the option of providing just the minimum wage or salary.
What are the remedies for violations?
Individuals may either file a complaint with the Department of Labor & Industries (L&I) or file a lawsuit if they believe a violation of the law has occurred. Remedies may include actual damages, double statutory damages (or $5,000, whichever is greater), interest of one percent per month, and payment of costs and attorneys’ fees. L&I may order payment of civil penalties in response to employee complaints, ranging from $500 for a first violation to $1,000 or 10% of damages (whichever is greater) for a repeat violation. Recovery of wages and interest will be calculated back four years from the last violation.
When do these changes take effect?
Employers must comply with the new requirements starting January 1, 2023.
Maryland: Governor signs insurance data security act
The Maryland Governor signed, on 21 April 2022, Senate Bill (‘SB’) 207, an Act relating to insurance data security that amends the Annotated Code of Maryland. In particular, SB 207 requires carriers to comply with data security provisions such as:
- Notifying the Maryland Insurance Commissioner, on a form and in a manner approved by the Commissioner, that a breach of the security of a system has occurred;
- Developing, implementing, and maintaining a comprehensive written information security program based on the carrier’s risk assessment;
- Providing personnel with cybersecurity awareness training that is updated as necessary to reflect the carrier’s risk assessment; and
- Establishing a written incident response plan designed to promptly respond to and recover from any cybersecurity event that compromises the confidentiality, integrity, or availability of non-public information in the carrier’s possession.
SB 207 will become effective on 1 October 2022.
Maryland: Governor Signs Bills on Student Data Privacy
Senate Bill (‘SB’) 325 and cross-filed House Bill (‘HB’) 769 for an Act Concerning Student Data Privacy were approved, on 21 April 2022, by the State Governor. In particular, the Act defines covered information to include education records, first and last name, biometric information, student identifiers, search activity, photo, online behaviour usage, and persistent unique identifiers.
In addition, the Act defines an operator as an individual or entity that engages with institutions under the school official exception of the federal Family Educational Rights and Privacy Act and operates in accordance with a contract or agreement with a public school or local school system in the state to provide an internet website, an online service, an online application, or a mobile application that, among other things, processes covered information.
Moreover, the Act establishes the Student Data Privacy Council whose mandate is to, among others:
- Study the development and implementation of the Student Data Privacy Act of 2015 to evaluate the impact of the Act on:
- The protection of covered information from unauthorised access, destruction, use, modification, or disclosure;
- The implementation and maintenance of reasonable security procedures and practices to protect covered information under the Act; and
- The implementation and maintenance of reasonable privacy controls to protect covered information under the Act;
- Review and analyse similar laws and best practices in other states; and
- Review and analyse developments in technologies as they may relate to student data privacy.
The Act will come into effect on 1 June 2022.
Cal/OSHA Approves Third and Final Readoption of COVID-19 Prevention Emergency Temporary Standards Through Year End
Since November 2020, California employers have struggled to comply with burdensome requirements under the Cal/OSHA COVID-19 Prevention Emergency Temporary Standards (ETS), which initially went into effect on November 30, 2020. The ETS were first revised and readopted on June 17, 2021, with a second round of revisions and readoption on December 16, 2021. The Occupational Safety and Health Standards Board (Board) has voted to adopt the third and final revision and readoption of the ETS (Third Revised ETS), which is expected to take effect on or before May 6 once the Office of Administrative Law completes its review and files the rule with the Secretary of State. Under Governor Newsom’s Executive Order N-23-21, the Third Revised ETS may extend only through December 31. Upon expiration of the Third Revised ETS, the Board is expected to implement permanent standards going forward.
Summary of Key Changes
Although the Third Revised ETS reflects relatively fewer changes than previous readoptions, some of the changes are significant:
- Eliminates any distinction on the basis of vaccination status, including for enforcement of facial covering and testing requirements.
- Expands no-cost testing requirements to all symptomatic employees, regardless of vaccination status or whether COVID-19 exposure was allegedly work related.
- Allows employees to present self-administered and self-read tests when required.
- Creates limited exemptions to testing requirements for “returned cases,” i.e., those previously infected with COVID-19 who have returned to work, for up to 90 days following the employee’s initial positive test or onset of symptoms.
- Eliminates physical distancing requirements (except during periods of major outbreak) and physical partition requirements.
- Defers to California Department of Public Health (CDPH) guidance for exclusion and return-to-work criteria for close contact with COVID-19.
- Revises return-to-work criteria, regardless of vaccination status.
- Eliminates cleaning and disinfecting procedures.
Further Explanation of Third Revised ETS
- Vaccination Status
- The Third Revised ETS eliminate the term “fully vaccinated,” and therefore no longer distinguish between employees on the basis of vaccination status.
- Face Coverings
- The Third Revised ETS essentially eliminate all face covering requirements regardless of vaccination status. However, face coverings are still required in the following scenarios:
- When required by the CDPH;
- For 10 days following an employee’s first positive test (for asymptomatic employees) or after an employee first develops symptoms; and
- For all indoor employees in the exposed group during an outbreak or major outbreak, or outdoor employees in the exposed group who cannot maintain physical distancing.
- The revised definition of “face covering” eliminates the previous “light test” requirement, which prohibited masks with fabrics that let light pass through.
- Employees who are exempt from wearing facial coverings (such as due to disability) are no longer required to socially distance, but they must continue to undergo weekly no-cost testing.
- Employers must provide both training and instruction to employees on proper wear and seal checks when providing respirators for voluntary use (whereas the previous standards only required instruction).
- The Third Revised ETS now use the term “infectious period” (instead of “high-risk exposure period”) when referring to the two-day period prior to an employee developing symptoms or testing positive. This change has no practical effect and appears to be an attempt to align with CDPH regulations.
- The Third Revised ETS add a new term, “returned case,” meaning any person who returns to work and does not develop COVID-19 symptoms for a period of 90 days after the initial onset of COVID-19 symptoms, or after an initial positive test (for asymptomatic employees), unless another period of time is ordered by the CDPH.
- Physical Distancing
- The Third Revised ETS eliminate physical distancing requirements except during periods of outbreak and major outbreak for employees in the exposed group.
- The Third Revised ETS also eliminate all physical partition requirements, even where physical distancing cannot be maintained during periods of outbreak and major outbreak.
- Cleaning and Disinfecting Procedures
- The Third Revised ETS remove all cleaning and disinfecting procedures.
- Exclusion From the Workplace
- The Third Revised ETS eliminate all previous exclusion criteria for close contacts and instead defer to CDPH guidance, along with a new requirement for employers to develop, implement, and maintain policies to prevent transmission of COVID-19 by those with close COVID-19 contact.
- Return-to-Work Criteria
- The Third Revised ETS distinguish return-to-work criteria on the basis of an employee’s symptoms, as opposed to the previous standards, which largely focused on vaccination status.
- For COVID-19 cases with resolving symptoms or no symptoms:
- Five (5) days have passed from the date that symptoms began or the employee’s initial positive test;
- Twenty-four (24) hours have passed since a fever has resolved without the use of fever-reducing medication;
- The employee presents a negative COVID-19 test collected on the fifth day or later; and
- If the employee cannot test or the employer does not require a test, the employee must be excluded for 10 days.
- For COVID-19 cases whose symptoms are not resolving:
- Twenty-four (24) hours have passed since a fever has resolved without the use of fever-reducing medication; and
- Symptoms are resolving or 10 days have passed since symptoms first began.
- Following close contact with COVID-19:
- Defer to exclusion/return-to-work criteria ordered by local or state health officials.
- As noted above, employees must wear a face covering until 10 days have passed from either the first positive test (for asymptomatic employees) or from when symptoms first developed, regardless of vaccination status or prior infection.
- Employees may now present self-administered and self-read COVID-19 tests when required if accompanied by another means of independent verification of the results, such as a time-stamped photograph. Previously, employees could provide negative results only from a test that was not self-administered or self-read unless observed by the employer or an authorized telehealth proctor.
- The Third Revised ETS further expand testing obligations by requiring no-cost testing during paid time to all symptomatic employees regardless of vaccination status and regardless of whether the employee claims that exposure was work related. Previously, employers were only required to provide no-cost testing to asymptomatic employees who were not fully vaccinated.
- Employers do not need to provide testing to “returned cases” (i.e., those who returned to work and do not develop COVID-19 symptoms for 90 days) following close contact with COVID-19 in the workplace, including during an outbreak. However, there is no such exception for “returned cases” during a major outbreak.
- Employers must provide testing to all employees in the exposed group during a major outbreak (whereas the previous standards only required employers to make testing available).
- Outbreak (i.e., three or more COVID-19 cases in an exposed group within a 14-day period).
- Employers must continue to provide no-cost testing to all employees in the exposed group, regardless of vaccination status, with an exception for “returned cases.” As a reminder, employers must provide testing immediately upon outbreak, then again one week later during an outbreak.
- Employers must now specifically provide testing to employees with close contact during an outbreak within three to five days following the close contact or exclude close contact employees from the workplace until the return-to-work criteria is satisfied.
- Major outbreak (i.e., 20 or more COVID-19 cases in an exposed group within a 30-day period).
- Employers must provide testing to employees with close contact during a major outbreak (no timeframe is indicated) or exclude those employees from the workplace until the return-to-work criteria is satisfied.
Consistent with prior readoptions, Cal/OSHA is expected to update its FAQs and other resources to specifically address the changes in the Third Revised ETS. As such, California employers are encouraged to monitor the Cal/OSHA website for further updates, including any updated model COVID-19 Prevention Program. As with the prior readoptions, California employers should consult with legal counsel while implementing updates to their COVID-19 practices and assess potential overlap with applicable local ordinances or other legal compliance obligations.
Connecticut’s Privacy Law Signed by Governor
Connecticut Governor Ned Lamont signed the Personal Data Privacy and Online Monitoring Act (CPDPA) into law on May 10, 2022, making Connecticut the most recent state to pass its own privacy law in the absence of comprehensive federal privacy legislation. Connecticut follows in the steps of Nevada, California, Virginia, Colorado and Utah in enacting its own comprehensive privacy legislation, with more pending in various state legislatures.
The Connecticut law goes into effect on July 1, 2023, giving companies just over a year to determine whether it applies, and if so to take steps to comply. Luckily, many organizations have already put compliance programs in place for the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), so adding some nuances from other state laws, including Connecticut, will not be as daunting as the first go-round with California’s law.
The CPDPA is designed to establish a framework for controlling and processing personal data. It:
- Sets responsibilities and privacy protection standards for data controllers;
- Gives consumers the right to access, correct, delete, and obtain a copy of personal data and to opt out of the processing of personal data for certain purposes (e.g., targeted advertising);
- Requires controllers to conduct data protection assessments;
- Authorizes the state attorney general to bring an action to enforce the bill’s requirements; and
- Deems violations to be Connecticut Unfair Trade Practices Act violations. https://cga.ct.gov/2022/ACT/PA/PDF/2022PA-00015-R00SB-00006-PA.PDF
The CPDPA applies to individuals and entities that conduct business in the state of Connecticut or target products or services to Connecticut residents and that either: (1) control or process personal data of at least 100,000 Connecticut consumers (except if the data is processed solely for completing a payment transaction); or (2) control or process the personal data of at least 25,000 Connecticut consumers and derive more than 25 percent of their gross revenue from the sale of personal data. The application of the law is not tied to an actual gross revenue figure like the CCPA is ($25 million), which is an important distinction that may narrow its applicability to organizations.
The law does not apply to nonprofits, state and local governments, higher education institutions, or national securities associations registered under the Securities Exchange Act. Consistent with other state data privacy laws, it also exempts financial institutions and data subject to the Gramm-Leach-Bliley Act and covered entities and business associates subject to the Health Insurance Portability and Accountability Act (HIPAA).
The law excludes 16 different categories of data from its purview, including protected health information under HIPAA, information subject to the Fair Credit Reporting Act, employee and job applicant data, and information protected by the Family Educational Rights and Privacy Act.
A “consumer” is defined as a Connecticut resident, and excludes individuals “acting in a commercial or employment context,” also known as a business-to-business exception, which is consistent with other state privacy laws.
Connecticut consumers will have the right to opt out of the processing of their personal data for targeted advertising, the sale of their data, or profiling for automated decisions that produce legal or similarly significant effects on the consumer. Entities subject to the law will have to provide “clear and conspicuous” links on their websites giving consumers the choice to opt out of that type of processing and, by January 1, 2025, must recognize a universal opt-out preference signal. Consistent with other state privacy laws, the CPDPA contains an anti-discrimination clause. These requirements, along with those of the other state laws that go into effect in 2023, warrant another look at companies’ websites to see if they need to be updated.
The CPDPA requires controllers to:
- Limit collection of personal data to the minimum amount necessary for the purpose of the collection;
- Limit use of the personal data to only the purpose of the collection or as the consumer has authorized;
- Establish and implement data security practices to protect the data; and
- Obtain consent before processing sensitive data, including data of any individual under the age of 13, and follow the provisions of the Children’s Online Privacy Protection Act.
Controllers will be required to update their website and other privacy notices to be transparent about the categories of data collected, the purpose of the collection, how consumers can exercise their rights under the law (including an active email address at which to contact the controller), what information is shared with third parties, and the categories of third parties with which the controller shares the information. In addition, a controller must disclose if it is selling personal data or processing it for targeted advertising and provide consumers with information on how they can opt out of the sale of their information.
Also consistent with the other state data privacy laws, the CPDPA requires that data controllers enter into a written contract with data processors prior to disclosing the personal data, outlining specific instructions for the data processing and data security requirements for the protection of the personal data. This requires organizations to review third-party contracts to determine whether they are disclosing personal data to third parties, whether CPDPA applies and to amend contracts with those third parties, as appropriate.
Violation of the CPDPA may land companies in an enforcement action by the Connecticut Attorney General (AG), who can levy fines and penalties under the Connecticut Unfair Trade Practices Act. However, there is a grace period for enforcement actions until December 31, 2024, for the AG to provide organizations an opportunity to cure any alleged violations. Beginning on January 1, 2025, the AG has discretion to provide companies with that opportunity to cure and can look at the conduct of the organization during the cure period to determine fines and penalties.
Significantly, consistent with Colorado, Virginia, and Utah, but tacking away from California, the CPDPA is clear that the law does not provide a private right of action for consumers to seek damages against organizations for violation of the law. Jurisdiction for violations rests solely with the AG.
2023 will be a busy compliance year for state data privacy laws, as laws in Virginia, Colorado, Utah, and now Connecticut will all go into effect. Now is the time to determine whether these new privacy laws apply to your organization and to start planning compliance obligations.
Illinois Equal Pay Act’s Certification Requirement Extended to More Employers
Illinois Governor J.B. Pritzker has signed into law an amendment to the Illinois Equal Pay Act (IEPA) requiring companies with 100 or more employees in Illinois to obtain an equal pay registration certificate from the Illinois Department of Labor (IDOL).
Previously, only companies with more than 100 employees were required to complete the IEPA registration certification.
This means that more employers must ensure compliance with IEPA’s substantial and, at times, confusing reporting requirements. For instance, under the law, there is no fixed deadline for certification (IDOL is assigning deadlines on a rolling basis). Further, questions remain on what analyses, if any, are necessary to support certification.
The IDOL has posted frequently asked questions about applying for an Equal Pay Registration Certificate. The IDOL has indicated it will be publishing proposed regulations on May 20, 2022. Jackson Lewis attorneys will provide an update when the proposed regulations are published.
Ex-felons don’t have right to explain records to employers – court
A U.S. appeals court on Tuesday said federal law does not require employers to give job applicants with criminal convictions a chance to explain their records before turning them away.
A unanimous three-judge panel of the 8th U.S. Circuit Court of Appeals said data-processing firm SC Data Center Inc did not violate the Fair Credit Reporting Act (FCRA) by pulling a job offer to Ria Schumacher after a background check revealed that she had been convicted of murder and armed robbery two decades earlier.
The court said that while the FCRA gives job applicants the right to dispute the accuracy of background checks with consumer reporting agencies, it does not entitle them to discuss their records directly with employers before they are denied jobs. The 9th Circuit came to the same conclusion in a 2020 case.
Schumacher was implicated in a murder case involving a drug deal gone awry in 1996, when she was a teenager, and was sentenced to 25 years in prison. According to court filings, she was released after serving 12 years. Schumacher’s lawyers at Brown & Watkins did not immediately respond to a request for comment. Nor did lawyers at Michael Best, who represent SC Data.
Schumacher in a 2016 lawsuit accused SC Data of violating the FCRA by rescinding her job offer before she had a chance to view her background check and discuss it with the company.
A federal judge in Missouri in 2019 denied SC Data’s motion to dismiss the claim. The judge said the FCRA’s requirement that employers provide prospective workers with copies of background checks would be meaningless if job applicants did not have a right to discuss the results.
But the 8th Circuit on Tuesday said nothing in the text of the law grants workers the right to provide context to employers about their criminal histories.
“While it is true that Schumacher did not receive a copy of her report prior to rescindment of the job offer, she has not claimed the report was inaccurate,” Circuit Judge Ralph Erickson wrote.
The panel included Circuit Judges Jane Kelly and Steven Grasz.
The case is Schumacher v. SC Data Center Inc, 8th U.S. Circuit Court of Appeals, No. 19-3266. https://www.reuters.com/legal/transactional/ex-felons-dont-have-right-explain-records-employers-court-2022-05-03/
Appeals Court opinion in Schumacher v. SC Data Center Inc, 8th U.S. Circuit Court of Appeals, No. 19-3266 (May 3, 2022): https://ecf.ca8.uscourts.gov/opndir/22/05/193266P.pdf
CFPB Argues the FCRA Requires Furnishers to Investigate Legal Issues Raised in Consumer Disputes
On April 7, the Consumer Financial Protection Bureau (CFPB or Bureau) filed an amicus brief in an appeal pending before the Court of Appeals for the Eleventh Circuit, in which the Bureau argued that the Fair Credit Reporting Act (FCRA) does not exempt furnishers from investigating disputes based on legal questions as opposed to factual inaccuracies. Section 1681s-2(b)(1) of the FCRA states that a furnisher of consumer information must conduct an investigation of disputed information upon receiving notice from a consumer reporting agency (CRA) that the consumer has disputed the accuracy of the information. Many courts have interpreted this to require furnishers to reasonably investigate factual questions, but not disputed legal issues (e.g., whether a consumer is liable for a reported debt). By contrast, the CFPB’s brief asks the Eleventh Circuit to “clarify that furnishers are required to conduct reasonable investigations of both legal and factual questions posed in consumer disputes.”
The subject plaintiff allegedly suffered identity theft that she discovered in 2016. The identity thief — an employee of the plaintiff — opened a credit card in the plaintiff’s name, and over the course of several years, accumulated over $30,000 in debt, while also making some payments from business bank accounts controlled by the plaintiff. When the plaintiff became aware, she notified the issuing bank, and the account was closed. The employee was ultimately convicted of identity theft. The bank, however, continued to furnish information about the outstanding debt to the credit bureaus. The plaintiff filed multiple disputes with the credit bureaus regarding the debt, which were then transmitted to the bank. Although the bank acknowledged that the plaintiff’s employee had opened the credit card account without the plaintiff’s consent, it concluded that the plaintiff was nevertheless responsible for the debt due to her negligent supervision of her employee and failure to object to continuous payments from bank accounts controlled by the plaintiff. Thus, the bank “verified” the debt and continued to furnish the information.
After the plaintiff filed suit, the bank moved for summary judgment, asserting multiple arguments, including that the plaintiff’s dispute turned on the disputed legal question of the plaintiff’s liability for the account rather than on a factual inaccuracy, as well as that the FCRA does not impose a duty on furnishers to investigate the accuracy of legal questions raised in consumer disputes. The district court granted summary judgment to the bank, concluding that it had “conducted a reasonable investigation as required under the procedural requirements of the FCRA.” In reaching this conclusion, the district court described the investigation duties imposed on furnishers under the FCRA as “procedural” and “far afield” from legal “questions of liability under state-law principles of negligence, apparent authority, and related inquiries.” Citing the First Circuit’s decision in Chiang v. Verizon New England, Inc., 595 F.3d 26 (1st Cir. 2010), the district court concluded that “a consumer cannot prevail on an FCRA claim by raising disputed legal questions as part of the dispute process instead of pointing to factual inaccuracies contained within the credit report.”
On appeal, the CFPB filed an amicus brief, arguing that furnishers are statutorily obligated to investigate both legal and factual questions raised in consumer disputes. The CFPB’s brief acknowledges that several federal courts have distinguished between “factual” and “legal” questions in determining the obligation of CRAs to investigate disputes under 15 U.S.C. § 1681i and that other decisions, including Chiang and unpublished decisions of the Eleventh Circuit, likewise recognize such a distinction in the context of furnisher investigations under Section 1681s-2(b)(1). Nevertheless, the CFPB argues that these cases in the furnisher investigation context were “incorrectly decided” because the FCRA does not make any such distinction. The CFPB argues that unlike CRAs, furnishers are qualified and obligated to assess issues, such as whether a debt is actually due or collectible, and routinely do assess such issues. The CFPB also goes further and suggests that even in the context of CRA investigations under Section 1681i, a formal distinction between legal and factual investigations is inappropriate and argues that a CRA has a duty to conduct a “reasonable investigation” of a legal dispute even if it does not have a duty to provide a legal opinion on the merits of the dispute. Finally, the CFPB urges the court to reject a “formal distinction” between factual and legal investigations because of the practical difficulty in distinguishing between them.
The CFPB’s arguments urge a decision contrary to the decisions of several federal courts that have distinguished between legal and factual questions in the context of both CRA and furnisher investigations under the FCRA. If the Eleventh Circuit accepts these arguments, it would create a circuit split with the First Circuit, and it would create significant uncertainty for furnishers attempting to comply with the FCRA by placing upon them a more onerous obligation than other courts have adopted. Further, a decision accepting the CFPB’s arguments could draw into question the distinction between legal and factual issues in the context of CRA investigations under Section 1681i as well, creating an even deeper split from existing precedent. The amicus brief is another example of the CFPB’s recent efforts to shape the state of the law governing the consumer reporting industry through both rulemaking and litigation.
Privacy Tip #331 – ACLU Settles Facial Recognition Suit with Clearview AI
The American Civil Liberties Union (ACLU) filed suit against Clearview AI, Inc. (Clearview AI) in March 2020, alleging that it violated the Illinois Biometric Information Privacy Act (BIPA) by capturing and using billions of individuals’ faceprints without consent. The ACLU filed suit “on behalf of groups representing survivors of domestic violence and sexual assault, undocumented immigrants, current and former sex workers, and other vulnerable communities uniquely harmed by face recognition surveillance.”
According to the ACLU, as part of the settlement Clearview AI has agreed to implement certain processes so that it is “in alignment with BIPA.” Clearview AI has agreed to:
- Restrict the sale of its faceprint database across the United States;
- Be permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities;
- Cease selling access to its database to any entity in Illinois, including law enforcement, for five years;
- Maintain an opt-out request form on its website;
- End its practice of offering free trial accounts to individual police officers; and
- Continue to filter out photographs that were taken or uploaded in Illinois for the next five years.
Private Employer May Terminate Employee for Racially Insensitive Social Media Post
Last week the New Jersey Appellate Division affirmed the dismissal of a lawsuit by an employee who alleged she had been wrongfully terminated based on her controversial Facebook post. In so doing, the court held that the employee of a private business is not protected from termination under the U.S. and New Jersey constitutions when the employer is not a state actor. In an age in which many individuals believe they have the legal right to post on social media and that doing so is akin to speaking in the “town square,” the New Jersey court’s decision provides notable pushback on this view and follows the majority of courts that have addressed this issue.
In McVey v. Atlanticare Medical System, an employee prominently displayed her association as her company’s corporate director of customer service on her private Facebook account. During the height of the nationwide protests involving George Floyd’s murder, the employee posted on her Facebook account that she believed the phrase “Black Lives Matter” was racist, caused segregation, and that she “support[ed] all lives … as a nurse they all matter, and [she] d[id] not discriminate.” She also posted that she did not condone the rioting that had occurred following Floyd’s death.
McVey’s employer, a health care system, informed all employees, including the plaintiff, through its social media policy to be mindful of their social media posts because their social media activities had the potential to affect the employee’s job performance, the performance of others, the company’s brand and/or reputation, and the company’s business interests. The company also warned employees in its social media policy to be cautious about “topics that may be considered objectionable or inflammatory – such as politics and religion….” Upon learning of the employee’s Facebook post, the company informed the employee that it was conducting an investigation, and then terminated her employment very shortly thereafter.
The employee’s lawsuit against her employer was limited to one claim: that the termination of her employment violated a clear mandate of New Jersey public policy under the First Amendment and the New Jersey state constitution. In rejecting this position, the court held that a claim for wrongful termination of employment as against public policy cannot be based on constitutional free speech rights where the employer is a private business. Constitutional rights can be violated only if there is state action. The court also confirmed that the employer had the right to terminate its at-will employee, particularly given the identification on the employee’s Facebook account of her status as an employee of the company, and the potential risk her inflammatory post could harm the company’s business.
Because the employee’s lawsuit was limited to her claim that the company had violated her federal and state constitutional rights, the court’s decision did not address the other instances in which employees may still have legal protection for their social media posts, including under the National Labor Relations Act. There are also instances in which employees have the right to speak out on social media when they believe their employer is violating antidiscrimination laws, wage and hour laws, and the like. As such, employers should not construe the McVey case as a blanket endorsement of a private employer’s right to terminate employees for their social media posts in all instances. Each situation will need to be assessed on a case-by-case basis depending upon the language in the post and any public policy considerations at issue.
Court Allows Leeway to Employers Under Fair Credit Reporting Act
A recent decision from the 8th Circuit U.S. Court of Appeals granted employers some modest flexibility in conducting and relying on background checks for potential new hires covered by the Fair Credit Reporting Act (“FCRA”).
The court made two key rulings:
- Employers have no obligation under the FCRA to provide job applicants with the opportunity to explain negative but accurate background check results.
- A job applicant cannot bring legal action against the employer based on a “technical” violation under the FCRA that does not result in concrete harm to the applicant.
The case involved a plaintiff who applied for a job and signed a form consenting to the procurement of a criminal background check. When the company received the results, it rescinded the applicant’s job offer. She sued the company for violation of the FCRA.
In her job application, she answered that she had never been convicted of a felony. She went on to explain that she had been arrested at age 17, but was ultimately found not guilty.
Public records told a different story, revealing that not only had the applicant been tried as an adult in a murder case involving a drug deal, but she was also convicted and sentenced to 25 years in state prison and released after serving 12 years. The applicant herself authored a book in the late 1990s detailing her criminal conviction experience. Upon discovery of this information, the company immediately rescinded her offer of employment.
It is worth noting that the defendant’s actions were not fully compliant with the detailed requirements of the FCRA:
- The employer took adverse action without first providing the consumer report and advance notice of potential adverse action to the applicant, and giving her an opportunity to dispute the report.
- The disclosure form provided to the applicant in six-point font was not ideal for meeting the FCRA’s “clear and conspicuous” requirement.
- The disclosure form was not provided as a separate and distinct document, but included a myriad of other provisions.
- The authorization was technically non-conforming because it did not explicitly contain the words “consumer report” or specify that the report would be obtained for consumer purposes.
The applicant in this case was not disputing the accuracy of the background check results, but rather was claiming that she was entitled to the opportunity to explain the accurate results to the employer.
The 8th Circuit disagreed and dismissed the claims, stating that the FCRA does not provide applicants with the right to explain negative but accurate information prior to the employer taking an adverse employment action. The Court also stated that technical FCRA violations that do not cause concrete injury or harm to the applicant are not actionable in court. Since the applicant had knowingly consented to a criminal background check and the results relied on by the employer were accurate, there was no substantial injury.
The court noted, however, that there are other federal circuits that have reached different conclusions about 1) how stringently employers must adhere to FCRA and 2) what types of claims will be permitted. So, in other circuits, technical violations may not be so readily dismissed. The 8th Circuit covers Arkansas, Iowa, Minnesota, Missouri, Nebraska, and North and South Dakota.
It is important to note that the 8th Circuit’s decision was limited to issues raised under the FCRA. The U.S. Equal Employment Opportunity Commission and state law may impact how an employer must handle the results of criminal background checks.
Practical Takeaways for Employers
So, what does this court holding mean for employers? Although the claims were dismissed in this case, it was only after extensive litigation and appeals.
In order to reduce the likelihood of such claims, employers should undertake a careful policy and form review to ensure strict compliance with the FCRA’s detailed requirements. In addition, employers should provide applicants with appropriate advance notice prior to taking any adverse employment actions based on background check information.
Fair Credit Reporting Act Compliance: New California Court Opinion Clarifies the Stand-alone Disclosure Requirement
The federal Fair Credit Reporting Act (“FCRA”) permits background checks for employment purposes, so long as employers obtain authorization from and provide the appropriate “stand-alone” disclosure to the applicant or employee regarding the background check, among other requirements. Willful violations of the FCRA’s stand-alone disclosure requirement can lead to recovery of statutory damages ranging from $100 to $1,000 per violation. Thus, a central issue in FCRA cases is whether the employer’s violation is “willful,” which requires a showing that the defendant’s conduct was “intentional” or “reckless.”
A recent opinion issued in Hebert v. Barnes & Noble, Inc., 2022 WL 1165858 (California Court of Appeal, April 19, 2022) provides clarity on the FCRA’s willfulness standard. In Hebert, the plaintiff filed a class action alleging defendant Barnes & Noble’s background check disclosure form included extraneous language unrelated to the background check and therefore willfully violated the FCRA’s stand-alone disclosure requirement. Specifically, Barnes & Noble used a sample disclosure form provided by its background check company that included the following footnote:
“Please note: Nothing contained herein should be construed as legal advice or guidance. Employers should consult their own counsel about their compliance responsibilities under the FCRA and applicable state law. [The background check company] expressly disclaims any warranties or responsibility or damages associated with or arising out of information provided herein.”
Plaintiff alleged that Barnes & Noble failed to provide applicants with a “stand-alone” disclosure as required under the FCRA and sought class-wide statutory damages for the alleged “willful” violation. Defendant moved for summary judgment, arguing that any non-compliance was due to an inadvertent drafting error when it attempted to update its FCRA disclosure, and was not willful (i.e., not knowing or reckless). The trial court agreed and granted Barnes & Noble’s motion for summary judgment.
On appeal, the California Court of Appeal reversed, concluding that there was sufficient evidence by which a reasonable jury could find a willful violation of the FCRA. The court focused on the facts that: (1) one of Barnes & Noble’s employees, as well as outside counsel, was aware the extraneous language would be included in the disclosure and reviewed the disclosure before it was issued; (2) Barnes & Noble used the disclosure for nearly two years and only updated the disclosure when it switched background check companies, and not because of any kind of internal FCRA compliance efforts.
The court rejected Barnes & Noble’s arguments that (1) the Barnes & Noble employee who reviewed the disclosure form for compliance purposes was a “non-lawyer” who was not well-versed in FCRA requirements and received only “general” training on the FCRA, and (2) Barnes & Noble had no reason to know its disclosure form violated the FCRA because it received no complaints from job applicants. The court noted that Barnes & Noble may have acted recklessly by “delegating all of its FCRA compliance responsibilities to a human resources employee who, by his own admission, knew very little about the FCRA” and reasoned that Barnes & Noble’s “continued and prolonged use” of the “problematic” disclosure form could suggest recklessness because Barnes & Noble had no proactive, routine monitoring system for FCRA compliance.
European Data Protection Authorities (“DPAs”) have issued some headline-grabbing cookie-related fines so far this year: on 6 January 2022, CNIL issued whopping fines of €150 million against Google and €60 million against Facebook, and similarly this month, Hamburg’s regulator warned Google and YouTube that their cookie banners, which collect data on users for targeted advertisements, do not comply with the transparency and consent requirements established by the e-privacy Directive and the GDPR.
The law requires that the rejection of cookies has to be as easy for users as the setting of cookies. Complicated user consent mechanisms will face harsh criticism from the regulators.
In the wake of these fines, we have seen DPAs carrying out audits on their countries’ largest e-commerce platforms to reveal how few are compliant with the European cookie laws (the ‘e-privacy Directive’ (implemented in the UK by PECR) and the GDPR).
For example, the Latvian DPA – Data State Inspectorate (“DSI”) – launched an audit of the websites belonging to 26 of its country’s largest e-commerce platforms and found that every website violated at least one of the EU laws on cookies. The DSI found that none of the websites had adequate consent mechanisms for placing cookies in users’ browsers, a mandatory requirement under the GDPR and Latvia’s Information Society Services Act.
The DSI has not issued any fines or other penalties at this stage, instead adopting a “consult first” approach and serving cure notices on each company with compliance deadlines of either 11 April or 12 August 2022, depending on the severity of the violation. The DSI warned that if the companies fail to correct the breaches by the deadline, it will exercise the “other powers” granted to it under the GDPR.
A further audit of 1,000 of the largest websites in the United States found that 67% were not using cookies in compliance with EU laws. PYMNTS commented that the study revealed “43% of websites not offering users the ability to opt out of selling data, 55% failing to notify users of cookies when they visit the site for the first time, and 32% of sites containing ad trackers.”
Although the e-privacy Directive and the GDPR are European laws with no direct force in the US, websites originating in the US must still modify their practices to ensure compliance if they intend to sell goods and services to customers residing in the EU.
Round 2 for noyb
Last month, NGO noyb launched its second round of action against website operators whose cookie banners do not comply with the e-privacy Directive, issuing more than 270 draft regulatory complaints. This comes almost one year after noyb’s first batch of draft complaints, sent in May 2021, in which it claimed to have filed a total of 456 complaints with 20 different DPAs. Noyb boasts that in the first round, 42% of all violations were remedied by companies within 30 days.
Cookie compliance remains a key area of focus for regulators across Europe and should, as a consequence, continue to be high on the corporate agenda to avoid fines and action from interested third parties, such as noyb.
Bank of Ireland fined €463,000 for breaches of GDPR
The Data Protection Commission recently announced its decision to fine Bank of Ireland (“BOI”) €463,000 for a number of breaches of the General Data Protection Regulation (“GDPR”).
The DPC’s announcement came following an inquiry by the regulator into 22 data breach notifications made by BOI between November 2018 and June 2019. The DPC found that 19 of the breaches met the definition of a personal data breach. One such breach affected approximately 47,000 data subjects, even though BOI’s initial notification stated that only one individual was affected. The notifications related to corruption of personal data in BOI’s data feed to the Central Credit Register (“CCR”), a centralised system managed by the Central Bank of Ireland which collects and stores information about loans. The incidents included unauthorised disclosure by BOI of customer data to the CCR and accidental alteration of certain customer data stored on the CCR – such alterations may have damaged customers’ credit ratings and prevented them from getting loans.
Ultimately, the DPC’s inquiry found that BOI breached a number of provisions of the GDPR, including:
- Article 32 – BOI failed to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk presented in transferring customer data to the CCR;
- Article 33 – BOI failed to report 17 data breaches without undue delay; and
- Article 34 – BOI failed to notify data subjects affected by the breach without undue delay in circumstances where the breach was likely to result in a high risk to those data subjects’ rights and freedoms.
In response to the DPC’s decision, BOI said that it “fully acknowledges” and “sincerely apologises” for the breaches and advised that it has taken measures to improve its ongoing CCR reporting. Pursuant to section 143 of the Data Protection Act 2018 (as amended), where an organisation does not appeal the DPC’s decision within 28 days, the DPC must apply to the Circuit Court to affirm its decision.
The decision to fine BOI follows the release of the DPC’s annual report in February 2022 which outlined that the regulator had, as at 31 December 2021, 81 statutory inquiries on hand. Therefore, it is likely we will see more fines being handed down by the DPC as the year progresses.
Ninth Circuit Provides Guidance on Web Scraping
On April 18, the Ninth Circuit issued its opinion in hiQ Labs, Inc. v. LinkedIn Corporation in which the court clarified its position on an important topic: whether the common practice of data “web scraping” can create criminal liability under the Computer Fraud and Abuse Act (CFAA). To be clear, the Ninth Circuit was not afforded the opportunity to directly rule on the question of whether web scraping may violate the CFAA, but in reaffirming a district court’s grant of a preliminary injunction, the Ninth Circuit strongly indicated that it does not believe the commonly used method of fetching and extracting data from websites violates federal law — even if a website has stated in its terms of service or otherwise that such activity is prohibited. Importantly, the court did not address any other potential claims that might arise from web scraping, such as intellectual property infringement claims.
I. The Computer Fraud and Abuse Act and Accessing Computers “Without Authorization”
Enacted in 1986, the CFAA amended the first federal computer fraud law, with the intent to address and criminalize hacking. The plain language of the law states, “[w]hoever … intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains … information from any protected computer … shall be punished” by fine or imprisonment. The statute defines “protected computer” broadly, allowing courts to hold that the statute applies to effectively any computer connected to the internet. As websites are stored on servers — “computers that manage network resources and provide data to other computers” — liability under the CFAA can arise where individuals access a website “without authorization” or in a manner that “exceeds authorized access.”
For years, courts have debated the meaning of “without authorization” under the CFAA. The question centers on whether “the CFAA is best understood as an anti-intrusion statute” or a “misappropriation statute.” Is liability limited to traditional hackers who infiltrate a computer’s security systems to gain access to information unavailable to the public? Or can “without authorization” apply to circumstances where someone has been granted access to a computer but then uses the computer in a manner not “authorized” by the owner’s rules? For example, if an employee is given a laptop for “work purposes only,” does the user access the computer “without authorization” when she browses a website for a new pair of shoes or tunes into a basketball game in the middle of March Madness? The latter interpretation may seem absurd, but in relying on the plain text of Section 1030(a)(2)(C), many circuits have held that even violation of an employer’s internal rules or a website’s terms of service, user agreements, confidentiality agreements, or other similar contractual terms could constitute a violation of the CFAA.
In hiQ Labs v. LinkedIn, the Ninth Circuit faced a new variation of this question. HiQ is a data analytics company that “scrapes” information from LinkedIn, a web-based social media service, for information included on public LinkedIn profiles. The company collects information like names, job titles, work history, and skills and filters that information through a predictive algorithm to sell clients information regarding, for example, which employees may be most likely to take a new position or what skills their workforce seems to be lacking.
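Mechanically, "scraping" means fetching a page's HTML and extracting structured fields from its markup. The sketch below illustrates the technique on an invented snippet of HTML; the class names, the sample markup, and the `ProfileScraper` class are all hypothetical and do not reflect LinkedIn's actual pages or hiQ's actual tooling:

```python
from html.parser import HTMLParser

# Invented sample markup standing in for a public profile page.
SAMPLE_PAGE = """
<div class="profile">
  <span class="name">Jane Doe</span>
  <span class="title">Data Engineer</span>
</div>
"""

class ProfileScraper(HTMLParser):
    """Extract 'name' and 'title' fields from the hypothetical markup above."""

    def __init__(self):
        super().__init__()
        self._field = None      # field whose tag we are currently inside
        self.record = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in ("name", "title"):
            self._field = cls

    def handle_data(self, data):
        if self._field and data.strip():
            self.record[self._field] = data.strip()
            self._field = None

scraper = ProfileScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.record)   # {'name': 'Jane Doe', 'title': 'Data Engineer'}
```

A real scraper would fetch pages over HTTP and run this extraction at scale, feeding the records into downstream analytics such as hiQ's predictive algorithm.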
In 2017, LinkedIn sent hiQ a cease-and-desist letter asserting that hiQ had violated LinkedIn’s user agreement and demanding that hiQ stop accessing and copying data from LinkedIn’s servers. The letter further stated that continued access by hiQ would violate state and federal law, including California Penal Code 502(c), the CFAA, the common law of trespass, and the Digital Millennium Copyright Act, and LinkedIn announced its intention to implement certain technical measures to prevent hiQ from accessing its website. HiQ then moved for a preliminary injunction, asking a district court to enjoin LinkedIn from implementing the protective technical measures on the grounds that such conduct would tortiously interfere with hiQ’s contracts with its paying customers. To hold in favor of hiQ, the district court had to find that hiQ was “likely to succeed on the merits” of its claim against LinkedIn, which necessarily required the court to consider whether hiQ’s conduct violated the CFAA. If so, hiQ’s tortious interference claim would fail: hiQ could not base a lawsuit on LinkedIn’s decision to block it from conducting an illegal activity.
Upon remand from the U.S. Supreme Court, the Ninth Circuit issued a lengthy and detailed opinion upholding the district court’s grant of the injunction, finding that hiQ had made the required showing by raising at least a “serious question” as to whether web scraping constitutes unauthorized access of a website in violation of the CFAA. In doing so, the Ninth Circuit reaffirmed its position on the side of the circuit split that treats the CFAA as an anti-intrusion statute prohibiting “breaking and entering,” or what is traditionally thought of as “hacking.” In the court’s eyes, the operative question under the CFAA is whether the computer’s gates are up or down. Unauthorized access occurs when an individual breaks past the gates when they are “up.” Public websites like LinkedIn, however, have no gates restricting data accessibility, and in that context the CFAA simply does not apply. As the court concluded: “[I]t appears that the CFAA’s prohibition on accessing a computer ‘without authorization’ is violated when a person circumvents a computer’s generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer. It is likely that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access without authorization under the CFAA.”
Again, to be clear, in reaching this conclusion, the Ninth Circuit did not officially declare web scraping to be legal, and LinkedIn has stated it will continue to litigate the merits of this case. The court’s opinion, however, is certainly notable and provides a roadmap for those evaluating compliance with the CFAA. If and when another court is afforded the opportunity to rule on this question, surely the Ninth Circuit’s well-reasoned and detailed opinion will provide crucial guidance and carry great persuasive weight.
Private Vs Public Ownership
In its decision, the Ninth Circuit made an important distinction between information that is “private” and information that is “public.” According to the court, the legislative history of Section 1030 of the CFAA “makes clear that the prohibition on unauthorized access is properly understood to apply only to private information.” By this, the court did not mean privately owned information — the question is not whether the information is owned by a private individual or business or by the government. Instead, the question is whether a website has erected any gates. Information may be “delineated as private” and accordingly protected by the CFAA “through use of a permission requirement of some sort.”
Presumably, this means that any private individual or government entity can erect technological barriers to protect its information from access by certain individuals, including web scrapers. Of course, government agencies are subject to the Freedom of Information Act (FOIA) and state equivalents, meaning there are limits on the types of information the government may keep confidential. But the government, just like any other entity, may create websites with gates that prevent web scrapers from fetching and aggregating data by closing such information off from the general public with password protections or other technological restrictions.
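A "gate" in the court's sense is a generally applicable access rule, such as a username-and-password check, that a server enforces before releasing data. The sketch below is a minimal, hypothetical illustration of such a check (the credential store and function are invented for the example); under the Ninth Circuit's reasoning, circumventing a check like this is what the CFAA targets, while fetching data served with no such check is not:

```python
import base64
from typing import Optional

# Hypothetical credential store; in practice credentials would be hashed
# and stored securely, not kept in plain text.
AUTHORIZED = {"alice": "s3cret"}

def is_gate_open(auth_header: Optional[str]) -> bool:
    """Return True only if the request presents valid HTTP Basic credentials.

    Requests with no credentials, or invalid ones, never reach the data:
    the information is "delineated as private" by a permission requirement.
    """
    if not auth_header or not auth_header.startswith("Basic "):
        return False                # no gate passed: deny access
    decoded = base64.b64decode(auth_header[len("Basic "):]).decode()
    user, _, password = decoded.partition(":")
    return AUTHORIZED.get(user) == password

token = base64.b64encode(b"alice:s3cret").decode()
print(is_gate_open("Basic " + token))   # True: valid credentials
print(is_gate_open(None))               # False: the gate is up
```

The same distinction holds regardless of who operates the server: a public website that performs no such check has, in the court's terms, left its gates down.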
Issues Still Remaining
While it does offer insight into the Ninth Circuit’s perspective, the hiQ Labs decision also leaves open a number of important questions. For example, the court did not address whether, or to what extent, government bodies may erect barriers to otherwise public information. Nor did the court consider whether, if a government or private entity restricts access to information (by password, or through “softer” restrictions such as a CAPTCHA meant to prevent easy access by bots), developing workarounds to scrape the restricted information would violate the CFAA. The court likewise did not address whether the answer to this second question would differ depending on whether the entity restricting access is a government body or a private business, particularly given the well-established public interest in access to government records.
Beyond the issues presented by technological gates and barriers, other statutes may determine the permissibility of access to data and permissibility of blocking web scraping. For example, intellectual property statutes, such as the Copyright Act of 1976, the Lanham Act, state and federal trade secret laws, and state unfair competition laws, can provide protection to information. Copyright protects original works of creative expression fixed in a tangible medium of expression. This can include websites, mobile applications, compilations of data, and the software that enables websites and the website functionality used to query, process, and display data. Meanwhile, trademarks protect the identification of a good or service and can provide legal protection for brands. Both the Copyright Act and the Lanham Act provide litigants with a private right of action through which individuals or businesses may seek injunctive relief and/or monetary damages for infringement of their intellectual property rights. Additionally, copyright registrations for works of authorship may provide significant benefits to the registrant in litigation, including the ability to recover attorneys’ fees and seek statutory damages. Therefore, intellectual property protection for data assets should be carefully considered by any web-based business that permits access to its data or engages in web scraping.
What hiQ Means for Privacy Litigation
HiQ is not the only company that “scrapes” data as a business model. Another example is Clearview AI, a facial recognition company that has scraped billions of pictures from the internet to create a massive facial recognition database. Clearview AI’s database includes more than three billion photos of individuals — all scraped from social media sites, including Facebook, Twitter, Instagram, and Google. Clearview AI contracts to provide facial recognition services for law enforcement and other agencies.
While the Ninth Circuit’s holding in hiQ indicates that Clearview AI and other companies that have adopted scraping data as a business model can avoid criminal liability under the CFAA, web scrapers may still face legal challenges under state law regimes and intellectual property laws. For example, Clearview AI currently faces litigation in Illinois under the state’s Biometric Information Privacy Act (BIPA). Web scrapers may also see legal challenges based on state unfair competition and unfair and deceptive acts and practices statutes, which have been adopted in various forms in all 50 states. The remedies available under these state laws vary but often include injunctive relief, as well as civil penalties, compensatory damages, and attorneys’ fees. Further heightening the stakes, state attorneys general also can enforce these statutes.
While the Ninth Circuit’s decision in hiQ is significant, the fight over the legality of web scraping is far from over. The hiQ decision may provide a roadmap for where the Ninth Circuit and some other courts are headed on the issue of its legality under the CFAA, but there remain courts that have treated the CFAA as a misappropriation statute. Those courts may be more willing to side with companies like LinkedIn that hope to block the practice when scrapers violate terms of service or other contractual agreements. Further, even if all courts follow the Ninth Circuit, data scraping may still face legal challenges under other state and federal statutes.
For companies and government agencies looking to prevent data scraping on their websites, the hiQ decision suggests they should consider launching websites with technological barriers to entry, such as passwords or paywalls, that prevent general public access to their information. Additionally, if companies believe they may have information protected under copyright or trademark law, they should discuss with counsel the possibility of registering their intellectual property for added protection in addition to using appropriate intellectual property markings, such as copyright and trademark notices.
For companies using web scraping methods, the hiQ decision suggests they may be able to avoid liability under the CFAA for their actions. Web scrapers should remain aware of other potential legal challenges, which could result in significant monetary penalties or injunctive constraints.