March 2023 Screening Compliance Update


ClearStar is happy to share the following industry-related articles, written by subject matter experts and published online, to assist you in establishing and maintaining a compliant background screening program.


Creditors and Employers Beware: CFPB Amends Model Background Check Form and Adverse Action Language

The Consumer Financial Protection Bureau (CFPB or Bureau) issued a final rule updating, among other things, the model form for the Fair Credit Reporting Act (FCRA) Summary of Consumer Rights and information that must be included in adverse action notices under the Equal Credit Opportunity Act (ECOA). Specifically, the CFPB is correcting the contact information in the Summary of Consumer Rights model form for multiple federal agencies (including the Office of the Comptroller of the Currency (OCC) and the Federal Deposit Insurance Corporation (FDIC)), updating references to obsolete business types, and making other technical corrections. For ECOA, the Bureau is amending appendix A, which contains federal agency contact information that creditors must include in ECOA adverse action notices, and correcting its own contact information in appendix D.

As background, the Summary of Consumer Rights explains certain consumer rights available under the FCRA. Consumer reporting agencies (CRAs) must provide the Summary of Consumer Rights: (a) with each written disclosure from the CRA to a consumer (15 U.S.C. § 1681g(c)(2)(A)); and (b) with, or prior to providing, a consumer report for employment purposes (15 U.S.C. § 1681b(b)(1)(B)). A user must provide the Summary of Consumer Rights: (a) with the required disclosure prior to procuring an investigative consumer report (15 U.S.C. § 1681d(a)(1)); and (b) with pre-adverse action notices for employment purposes (15 U.S.C. § 1681b(b)(3)).

The updated model form is available on the CFPB website in English and Spanish.

As for ECOA, § 1002.9(b)(1) provides model language that satisfies certain disclosure requirements of 12 CFR 1002.9(a)(2) relating to adverse action notices. These notices must include federal agency contact information located in appendix A to Regulation B. The Bureau is revising appendix A to update agency contact information, including that for the OCC, the FDIC, and the Federal Trade Commission. Additionally, appendix D to § 1002 sets forth the process by which entities may request official Bureau interpretations of Regulation B. The CFPB is amending paragraph 2 in appendix D to correct the zip code for the Bureau and to replace the reference to the Division of Research, Markets, and Regulations with a reference to the new, expanded Division of Research, Monitoring, and Regulations.

The rule becomes effective April 19, 2023, but the mandatory compliance date for the amendments to the FCRA Summary of Consumer Rights is March 20, 2024. As a result, CRAs, employers, and creditors that use the model forms and language have one year to update them.

Click Here for the Original Article

CFPB and FTC issue request for information on background screening in connection with rental housing

The Consumer Financial Protection Bureau and Federal Trade Commission issued a request for information (RFI) yesterday seeking comment on “background screening issues affecting individuals who seek rental housing in the United States, including how the use of criminal and eviction records and algorithms affect tenant screening decisions and may be driving discriminatory outcomes.”  Comments in response to the RFI must be received by May 30, 2023.

The RFI follows the White House’s release last month of a “Blueprint for a Renters Bill of Rights” (Blueprint), which set forth principles intended to “create a shared baseline for fairness for renters in the housing market” and directed various federal agencies, including the CFPB and FTC, to take various actions to further those principles.  Among those actions was the issuance of RFIs “seek[ing] information on a broad range of practices that affect the rental market, including the creation and use of tenant background checks, the use of algorithms in tenant screenings, the provision of adverse action notices by landlords and property management companies, and how an applicant’s source of income factors into housing decisions.”

The RFI encourages the submission of comments and information by “tenants, prospective tenants, tenants’ rights and housing advocacy groups, industry participants (including property managers, commercial landlords, individual landlords, and consumer reporting agencies that develop credit and tenant screening reports used by landlords and property managers to screen prospective tenants), other members of the public, and government agencies.”  The RFI is divided into the following four sections that each contain a series of wide-ranging questions:

  1. Tenant screening generally
  2. Criminal records in tenant screening
  3. Eviction records in tenant screening
  4. Using algorithms in tenant screening

The CFPB has previously issued two reports on tenant background checks, one discussing consumer complaints received by the CFPB that relate to tenant screening by landlords and the other discussing practices of the tenant screening industry.

Click Here for the Original Article

CFPB Launches Inquiry into the Business Practices of Data Brokers

The Consumer Financial Protection Bureau (CFPB) has launched an inquiry into companies that track and collect information on people’s personal lives. In issuing this new Request for Information, the CFPB wants to understand the full scope and breadth of data brokers and their business practices, their impact on the daily lives of consumers, and whether they are all playing by the same rules. This request is a chance for the public to share feedback about companies that play a significant role in people’s lives and in the economy. This feedback will shed light on the current state of an industry that largely operates out of public view, and inform the CFPB’s future work to ensure that these companies comply with federal law.

“Modern data surveillance practices have allowed companies to hover over our digital lives and monetize our most sensitive data,” said CFPB Director Rohit Chopra. “Our inquiry will inform whether rules under the Fair Credit Reporting Act reflect these market realities.”

Congress passed the Fair Credit Reporting Act (FCRA) in response to concerns about data brokers assembling detailed dossiers about consumers and selling this information to those making employment, credit, and other decisions. People often have little choice about whether to enter into business relationships with these companies or whether they will be tracked, yet the data these companies collect may nevertheless play a decisive role in significant life decisions, like buying a home or finding a job. The FCRA provides a range of protections, including accuracy standards, dispute rights, and restrictions on how data can be used. The law covers data brokers like credit reporting companies and background screening firms, as well as those who report information to these firms.

The inquiry seeks information about business practices employed in the market today to inform the CFPB’s efforts to administer the law, including planned rulemaking under the FCRA. The CFPB is interested in hearing about the business models and practices of the data broker market, including details about the types of data the brokers collect and sell and the sources they rely upon. The feedback received will help the CFPB gain a better understanding about the current state of business practices in this area. The CFPB is also interested in hearing about people’s direct experiences with these companies, including when individuals attempt to remove, correct, or regain control of their data.

The request for information will be published in the Federal Register, and the public will have until June 13, 2023 to submit their comments.

Click Here for the Original Article


FTC Bans BetterHelp, Inc. From Revealing Consumers’ Sensitive Health Data to Third Parties for Advertising

The practice of turning over our personal data to online platforms is nothing new and, some may argue, a routine part of using the Internet today. A survey by the Pew Research Internet Project reveals that roughly six in ten U.S. adults do not think it is possible to go through daily life without having data collected about them by companies or the government, and more than 60% of people are concerned about this.

Most of the time, providing this information feels non-threatening. For example, consider going to the grocery store and entering your phone number for your loyalty rewards account. Harmless enough. What we tend not to think about, though, is that the phone number is then attached to your name and all of your shopping preferences each time you enter it at the kiosk and begin to scan your items. Now, merely by providing your phone number, you have created a substantial profile of your eating habits, which explains why you might receive just the coupon you needed in the mail only a few days later. This is no coincidence, and, indeed, many times, it is an added convenience to everyday life.

But when does the collection of personal data cross the line? The FTC helped answer this question through its issuance of a proposed order last week following an investigation into the practices of BetterHelp, Inc. (“BetterHelp”), an online platform that provides a variety of mental health services.

BetterHelp aims to pair individuals with online counselors who fit their particularized needs. To do so effectively, consumers must answer a series of questions that inquire into their sensitive health information, from whether they have ever experienced suicidal thoughts to whether they are on any medications. This is in addition to what has become “standard” personal information they must provide, like their name, email address, and date of birth. Throughout this process, consumers are reassured every step of the way by BetterHelp’s representation that it does not use or disclose this personal health data. For example, directly below the question asking whether the consumer is currently taking any medication is the following statement: “Rest assured – your health information will stay private between you and your counselor.” Again, this seems harmless enough.

However, following the FTC’s investigation, the Commission discovered this was not the case. Indeed, all along, BetterHelp had been turning consumers’ email addresses, IP addresses, and health questionnaire information over to Facebook, Snapchat, Criteo, and Pinterest to fuel the ability of these third parties to target potential new clients with ads.

“For example, the company used consumers’ email addresses and the fact that they had previously been in therapy to instruct Facebook to identify similar consumers and target them with advertisements for BetterHelp’s counseling service, which helped the company bring in tens of thousands of new paying users and millions of dollars in revenue,” the FTC explained in its announcement.

Now, the FTC is taking action by agreeing 4-0 to issue a proposed order that would require the company to pay a staggering $7.8 million to provide partial refunds to consumers who signed up and paid for BetterHelp’s services between August 1, 2017 and December 31, 2020.

The action is the first of its kind that would return funds directly to consumers impacted by the disclosure of their health data.

“When a person struggling with mental health issues reaches out for help, they do so in a moment of vulnerability and with an expectation that professional counseling services will protect their privacy,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Instead, BetterHelp betrayed consumers’ most personal health information for profit. Let this proposed order be a stout reminder that the FTC will prioritize defending Americans’ sensitive data from illegal exploitation.”

Additionally, the proposed order:

  • Imposes a complete ban on BetterHelp’s disclosure of health information for advertising;
  • Requires BetterHelp to obtain affirmative express consent before disclosing personal information to certain third parties for any purpose;
  • Requires BetterHelp to establish a comprehensive privacy program with the purpose of protecting consumer data;
  • Requires BetterHelp to direct third parties to delete the consumer health and other personal data that BetterHelp revealed to them; and
  • Limits how long BetterHelp can retain personal and health information under a data retention schedule.

A description of the consent agreement package is set for publication in the Federal Register in the coming days, where it will be subject to public comment for 30 days. Following the public comment period, the Commission will decide whether to finalize the proposed consent order. Instructions for filing comments will appear in the published notice, and once processed, comments will also be available for public viewing.

The FTC action serves as an important reminder to companies that having a sound privacy policy in place is only half the battle; being compliant on paper is only as good as the company’s ultimate actions. Working with DW’s Privacy and Cybersecurity Team can reassure companies that they are not only displaying a privacy policy that complies with the ever-changing labyrinth of privacy laws, but also sticking to the words contained in that policy through their everyday actions – particularly regarding when it is, and is not, permissible to disclose data.

Click Here for the Original Article

FTC Orders Small Business Credit Reporting Companies to Disclose Information About Industry

The Federal Trade Commission (FTC) announced that it is launching an inquiry into the small business credit reporting industry. Specifically, it is ordering five firms to provide detailed information about their products and processes.

According to the FTC, the impetus for this inquiry is that unlike consumer reports, which are governed by the Fair Credit Reporting Act, there is no federal law that specifically outlines processes and protections for small business credit reporting. The FTC says this can cause confusion, particularly for small businesses attempting to correct errors or omissions. According to the FTC, “[s]ometimes small businesses only discover they have a credit report when they are denied credit by a supplier.”

The orders issued to the five firms require them to provide:

  • Their processes for gathering, generating, and organizing data related to small businesses.
  • The steps taken to ensure that the information contained in the credit reports is accurate.
  • The number of data contributors with which they currently have data contribution agreements and the steps taken to ensure the data provided is accurate.
  • All business credit scores that are included in their reports, and the factors, information, and data included in arriving at each business credit score.
  • Any algorithms, machine learning, or other automated systems that are used in relation to the business credit report data.
  • All free-of-charge services available for entities to view their own report or information.
  • Information about credit monitoring products that they sell, including sales revenue.
  • Information about marketing materials used to sell small business credit reports, including gross sales revenue.
  • Any practices or product features to provide updates or corrections to business report customers in situations where information about an entity is corrected after the report is obtained.

The firms will have 60 days after service to respond with the requested information and documents.

Click Here for the Original Article


Current Issues Under an Old Law: Wisconsin’s Arrest and Conviction Record Protected Classification

Although the Wisconsin Fair Employment Act (WFEA) has included arrest and conviction record as a category protected from discrimination since 1977, a decision of the Wisconsin Supreme Court last year demonstrates that the contours of protection under the law are still being developed. In general, the law requires any Wisconsin employer (with some limited exceptions, such as schools dealing with unpardoned felons) to establish that a “substantial relationship” exists between the circumstances of the particular job and the circumstances of the arrest or charge (in order to suspend an employee) or the conviction (in order to refuse to hire or to terminate employment).

In Cree, Inc. v. LIRC, 2022 WI 15, the Wisconsin Supreme Court explained that “Wisconsin’s laws regarding employment discrimination based on conviction record serve two important, and sometimes competing interests – rehabilitating those convicted of crimes and protecting the public from the risk of criminal recidivism.” In Cree, an offer of employment to Derrick Palmer was rescinded after the company learned that Palmer had been convicted of eight domestic violence crimes. The primary issue in the case became whether the nature of such crimes presented enough of a danger in the workplace to satisfy the substantial relationship test. The Administrative Law Judge who conducted the hearing ruled that it did, the Labor and Industry Review Commission (LIRC) said it did not, the Racine County Circuit Court said it did, then the Wisconsin Court of Appeals ruled LIRC was correct and reversed the Circuit Court. Much of the discussion in the seesaw of decisions focused on whether Palmer’s history of domestic violence was more likely to be an isolated threat in Palmer’s intimate personal relationships, rather than a broader threat to customers or co-workers encountered in the workplace. The Wisconsin Supreme Court ultimately reversed the Court of Appeals and concluded the purpose of the substantial relationship test is to assess whether “tendencies and inclinations” to behave in a certain way would be likely to reappear later in similar contexts. The decision makes clear that domestic violence convictions must be assessed the same way as other convictions involving violent behavior. It rejected the notion that because domestic violence perpetrators have a relationship with their victims, there is not as much of an indicator of generally violent tendencies as would be the case with a conviction involving the assault of strangers.

Undertaking that analysis, the Supreme Court focused on similar opportunities for Palmer to isolate victims in the Cree workplace and the character trait of a willingness to use violence against others when angry. The Court reasoned that interpersonal relations with co-workers in the employment at Cree, coupled with minimal day-to-day supervision of that particular job, could provide opportunities for violent behavior if disputes arose in the workplace.

At the same time, the Supreme Court emphasized that its holding was based on the specific circumstances of the job at Cree stating, “Nothing in this opinion condemns all domestic violence offenders to a life of unemployment.” A vigorous dissent argued that the majority opinion inappropriately interjected “character traits” into the analysis when that factor is not referenced in the WFEA.

This case underscores the need for Wisconsin employers to carefully apply the substantial relationship test by assessing the underlying circumstances of an employee’s criminal convictions and the type of conduct engaged in by the employee which led to the conviction—rather than simply relying on the name or category of the crime. Those factors then need to be juxtaposed with the actual day-to-day functions and factors at play in the workplace, and not merely compared to a position’s title or job description. The more sensitive the work environment (e.g., vulnerable customers or patients encountered in the job) the more likely a wider range of offenses could be argued to present a risk sufficient to satisfy the substantial relationship test as viewed in Cree.

Click Here for the Original Article

The New Virginia Consumer Data Protection Act

In 2021, Virginia became the second state (after California) to enact a comprehensive consumer privacy law. That law, the Virginia Consumer Data Protection Act (VCDPA), went into effect on January 1, 2023.

Though the new law is modeled after the California Consumer Privacy Act of 2018 (CCPA), it stops short of the sweeping protections in the California law because it excludes employee and business representative data. However, businesses must still be mindful of the obligations the new Virginia law creates.

Here are the key provisions of the VCDPA that businesses need to understand.

Who does the VCDPA protect?

The VCDPA protects “consumers,” defined as Virginia residents acting in an individual or household context. The law excludes individuals acting in an employment or commercial context from protection.

Under the VCDPA, consumers have the right to know whether controllers — the companies that determine why and how to process personal data — are processing their personal data.

Consumers also have the right to access, correct inaccuracies in, and delete their personal data. Additionally, the law affords them the right to download a portable copy of their data in a format that allows them to transmit the data to another controller. Consumers also can opt out of the sale and use of their data for targeted advertising or profiling that has a “legal or similarly significant effect” on them.

What data does the VCDPA safeguard?

The VCDPA defines “personal data” as any information linked or reasonably linkable to an identified or identifiable individual. “Personal data” does not include any publicly available or de-identified data, which is any data that cannot be linked to a person.

The law also exempts certain types of data from its coverage. The law does not apply to data held by a public utility, employment records, protected health information processed by covered entities and business associates under the Health Insurance Portability and Accountability Act (HIPAA), and other types of information already regulated under other federal laws, including the Gramm-Leach-Bliley Act (GLBA), the Family Educational Rights and Privacy Act (FERPA), the Fair Credit Reporting Act (FCRA), the Children’s Online Privacy Protection Act (COPPA), and the Farm Credit Act.

Who is subject to the VCDPA?

The VCDPA applies to controllers — a company that determines the purposes for and means of processing personal data — and processors — a company that processes personal data on behalf of a controller — that meet two requirements:

  1. They conduct business in Virginia or sell products or services intentionally targeted to Virginia residents.
  2. They control or process the personal data of 100,000 or more consumers during a calendar year or control or process the personal data of at least 25,000 consumers and derive over 50 percent of gross revenue from the sale of personal data.

Nonprofits, public companies, and higher education institutions are exempt from the law, as are financial institutions regulated by the GLBA and some healthcare entities that fall under HIPAA.
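For illustration only, the two applicability thresholds above can be expressed as a short decision rule. This is a hypothetical sketch, not legal advice: the function name and parameters are my own, and the exemptions described above (nonprofits, public companies, higher education institutions, GLBA financial institutions, and certain HIPAA entities) are not modeled.

```python
def vcdpa_applies(targets_virginia: bool,
                  consumers_processed: int,
                  data_sale_revenue_share: float) -> bool:
    """Illustrative sketch of the VCDPA applicability thresholds.

    Does not model the statutory exemptions (nonprofits, GLBA, HIPAA, etc.).
    """
    # Must conduct business in Virginia or target products/services
    # to Virginia residents.
    if not targets_virginia:
        return False
    # Threshold 1: 100,000+ consumers' personal data in a calendar year.
    large_scale = consumers_processed >= 100_000
    # Threshold 2: 25,000+ consumers AND over 50% of gross revenue
    # derived from the sale of personal data.
    data_seller = (consumers_processed >= 25_000
                   and data_sale_revenue_share > 0.50)
    return large_scale or data_seller
```

Under this sketch, a company processing 30,000 Virginians' data would be covered if 60% of its gross revenue came from data sales, but not if only 40% did.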

What steps should businesses take to comply with the VCDPA?

In addition to limiting the collection of data to what is “adequate, relevant, and reasonably necessary” for business purposes, the VCDPA requires businesses to take several steps to ensure compliance to avoid injunctions and penalties of up to $7,500 per violation.

  1. Issuing a privacy notice

Controllers must prepare a privacy notice that explains to consumers what data they are collecting and why they are collecting it. The notice must also explain how consumers can exercise their rights, including their right to appeal. This means controllers must share their contact information. The notice must also detail whether the company shares any of the collected personal data with third parties, explaining the categories of data shared and describing the categories of the third parties.

Any controller that sells personal data to third parties or that processes personal data for targeted advertising must disclose these practices and explain how consumers can opt out of this processing. Note that the VCDPA defines the “sale” of personal data as the exchange of data for monetary consideration. It does not consider a “sale” to include disclosing personal data to a processor that processes the personal data on the controller’s behalf, disclosing or transferring data to a third party to provide a product or service requested by a consumer, disclosing or transferring personal data to an affiliate of the controller, disclosing data as part of a transaction like a merger or acquisition, or disclosing personal data that consumers have intentionally made available to the public through mass media.

  2. Requiring consumers to opt in before processing sensitive data

Consumers must consent before a controller can process their sensitive personal information. The law defines “sensitive data” as follows:

  • Personal data that reveals an individual’s race, ethnic origin, religious beliefs, mental or physical health diagnoses, sexual orientation, or citizenship or immigration status
  • Personal data that comes from a child under 13
  • Genetic or biometric data processed to identify an individual
  • Precise geolocation data

Businesses such as delivery app services, fitness trackers, and location-based services must obtain consumers’ opt-in consent before processing such sensitive data. They must also obtain consent from parents or guardians in compliance with COPPA before processing a minor’s data.

  3. Implementing data security practices

Controllers should implement “reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.” These practices must be “appropriate to the volume and nature of the personal data at issue.”

Before processing data, companies must also conduct a data protection assessment for processing personal data for targeted advertising, selling personal data, processing personal data for profiling if it presents a risk of unfair treatment or injury to consumers, processing of sensitive data, and processing activities that present a “heightened risk of harm” to consumers. The assessment should weigh the benefits of processing to controllers, consumers, other stakeholders, and the public against the potential risks to consumers. It should also determine whether any risks can be mitigated by safeguards. Using de-identified data and meeting consumers’ reasonable expectations should be factored into the assessment.

The attorney general may request controllers to produce their data protection assessments in an investigation. However, these assessments are exempt from FOIA requests.

  4. Signing a data processing agreement

Before a processor processes data on a controller’s behalf — including the collection, use, storage, disclosure, analysis, deletion, or modification of personal data — the controller and processor must enter a contract with five requirements:

  1. Keeping the data confidential
  2. Deleting or returning all personal data to the controller at the end of the relationship except as required by law
  3. Making data available to the controller upon request
  4. Cooperating with third-party assessments
  5. Creating similar agreements with any subcontractors

All processors must follow controllers’ instructions and help controllers meet their obligations under the VCDPA.

  5. Responding to data subject requests

When a controller receives a request from a consumer, it must respond within 45 days. This deadline may be extended another 45 days if the controller timely notifies the consumer of the need for the extension.

If the controller decides not to take action on a request, it must notify the consumer within 45 days. If the controller provides information, it must be given free of charge to the consumer up to twice a year. The controller may charge a reasonable administrative fee to cover the costs of complying with or declining the request.

Consumers have the right to appeal a controller’s refusal to act on their request. The controller must establish an appeal process that ensures a written response, with an explanation, to consumers within 60 days of receipt of the appeal. The controller must also establish an online mechanism for contacting the attorney general to submit a complaint if the appeal is denied.
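The request-and-appeal timeline above reduces to simple date arithmetic. A minimal sketch follows; the function names are hypothetical, and real compliance deadlines may turn on details (such as how receipt is counted) that the sketch does not address.

```python
from datetime import date, timedelta

RESPONSE_DAYS = 45   # initial window to act on a consumer request
EXTENSION_DAYS = 45  # one extension, if the consumer is timely notified
APPEAL_DAYS = 60     # written response to a consumer's appeal

def response_deadline(received: date, extended: bool = False) -> date:
    """Deadline for a controller to respond to a consumer request."""
    days = RESPONSE_DAYS + (EXTENSION_DAYS if extended else 0)
    return received + timedelta(days=days)

def appeal_deadline(appeal_received: date) -> date:
    """Deadline for the controller's written response to an appeal."""
    return appeal_received + timedelta(days=APPEAL_DAYS)
```

For example, a request received March 1, 2023 would be due April 15, 2023, or May 30, 2023 if the extension were properly invoked.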

The impact of the VCDPA

The VCDPA is the latest in Virginia’s patchwork of privacy laws, which include the Personal Information Privacy Act addressing limitations on merchants’ use of personal data, the Insurance Data Security Act governing insurers, and the Data Breach Notification Law, which requires businesses and government agencies to notify residents of any breach that could lead to fraud or identity theft.

Organizations subject to the law should review their data collection and processing protocols. They should also update their privacy policies, create a procedure for handling consumer requests and appeals, and begin conducting data protection assessments.

Click Here for the Original Article

California Employers Beware: SB 1162 Expands Pay Data Reporting and Transparency Requirements

On September 27, 2022, Governor Newsom signed the Pay Transparency Act (“SB 1162”) amending California Government Code section 12999 and California Labor Code section 432.3. Of note, SB 1162 significantly changes two key areas that employers should be aware of: (i) it changes and adds to the pay data reporting requirements set forth in Gov. Code § 12999; and (ii) it imposes new salary disclosure and recordkeeping obligations under the pay transparency requirements of Lab. Code § 432.3. The changes, further described below, became effective January 1, 2023.

Updates to California Government Code Section 12999

Changes to Pay Data Reporting Obligations

Prior to SB 1162, if a private employer with 100 or more employees was required to submit an annual Employer Information Report (“EEO-1”) to the federal Equal Employment Opportunity Commission, then it was also required to submit a pay data report to the California Civil Rights Department (“CRD”) – formerly, the California Department of Fair Employment and Housing (“DFEH”). Under the pre-existing law, an employer could satisfy the state’s pay data reporting requirement by submitting a federal EEO-1 containing the same or substantially similar pay data information to the CRD on or before March 31st of each year.

Notably, under the new law, a private employer with 100 or more employees is required to submit a pay data report to the CRD regardless of whether it files a federal EEO-1 report, and EEO-1 reports can no longer be submitted to satisfy the California pay data reporting requirement.

In addition to changing the format of reporting, SB 1162 expands the data that must be reported. Under the old law, employers were required to include the number of employees by race, ethnicity and sex in specified job categories in their pay data report. Starting this year, employers must also include the median and mean hourly rate for each combination of race, ethnicity and sex within each job category in their pay data reports. This is a major change as it requires employers to provide direct comparisons of pay rates between different race, ethnic and gender groups.
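For illustration only, the grouping arithmetic behind this new requirement can be sketched as follows. The records, field names, and rates below are invented, and this is not the CRD’s reporting template; it simply shows the mean/median calculation per race, ethnicity, and sex combination within a job category.

```python
from statistics import mean, median
from collections import defaultdict

# Hypothetical employee records: (job_category, race_ethnicity, sex, hourly_rate)
employees = [
    ("Professionals", "Hispanic or Latino", "Female", 42.50),
    ("Professionals", "Hispanic or Latino", "Female", 47.00),
    ("Professionals", "White", "Male", 51.25),
    ("Professionals", "White", "Male", 49.75),
]

# Group hourly rates by each combination of job category, race/ethnicity, and sex.
groups = defaultdict(list)
for category, race, sex, rate in employees:
    groups[(category, race, sex)].append(rate)

# Compute the mean and median hourly rate for each combination.
report = {
    combo: {"mean": round(mean(rates), 2), "median": round(median(rates), 2)}
    for combo, rates in groups.items()
}
```

With the sample records above, each combination yields a mean and median hourly rate that can be compared directly across groups, which is the point of the new reporting element.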

Previously, employers with multiple locations were permitted to submit a single consolidated report covering the pay data for all of their locations. SB 1162 eliminates this consolidated-report option and requires employers to prepare and submit a separate report for each location.

The new law changes the annual filing deadline from March 31st to the second Wednesday in May each year. In 2023, the deadline will be on May 10th.
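Because the new deadline is defined as the second Wednesday in May rather than a fixed date, it moves each year. A small helper (a sketch, not anything prescribed by the statute) can compute it for any year:

```python
from datetime import date, timedelta

def second_wednesday_of_may(year: int) -> date:
    """Return the second Wednesday of May for the given year."""
    d = date(year, 5, 1)
    # Advance to the first Wednesday of May (weekday() == 2 is Wednesday),
    # then add one more week to reach the second Wednesday.
    days_until_wed = (2 - d.weekday()) % 7
    return d + timedelta(days=days_until_wed + 7)
```

For 2023 this yields May 10, consistent with the deadline noted above.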

New Contractor Reporting Requirements

SB 1162 also requires private employers with 100 or more employees employed through labor contractors to submit a separate pay data report covering the labor contractor’s employees. In the separate pay data report, employers must also disclose the names of all labor contractors used to supply the employees. The law defines “[l]abor contractor” as “an individual or entity that supplies, either with or without a contract, a client employer with workers to perform labor within the client employer’s usual course of business.”

It is up to the labor contractors to provide all of the necessary pay data to the employers so that the employers can submit their reports to the CRD. In the event an employer is unable to submit a complete and accurate report due to a labor contractor’s failure to provide the requisite information, a court has the discretion to apportion penalties to the labor contractor that has failed to provide the pay data to the employer.


Under SB 1162, the CRD may seek an order requiring an employer to comply with the above reporting requirements and shall be entitled to recover the costs associated with seeking the order. Additionally, the CRD may request that a court impose civil penalties on an employer that fails to comply for an amount “not to exceed one hundred dollars ($100) per employee” for a first-time violation. For subsequent failures to submit the reports, civil penalties may increase to $200 per employee. This change is significant because under the previous law, the only remedy available to the CRD was seeking a court order requiring employers to file their reports.

Updates to California Labor Code Section 432.3

Changes to Pay Scale Reporting Requirements

Prior to SB 1162, employers were required to provide candidates for employment with the pay scale for the position the candidate was seeking only upon request. “Pay scale” is defined under the Labor Code as the salary or hourly wage range that the employer reasonably expects to pay for the position. Under the new law, employers with 15 or more employees must now provide a pay scale range for each position they list in a job posting, regardless of whether a candidate requests the information. This requirement applies to both internal and external job postings, including postings made through a third party (e.g., recruiters). Notably, no penalty will apply for a first violation of this requirement if the employer can show that all job postings for open positions have been updated to include the pay scale.

SB 1162 also expands pay scale requirements for current employees. Under the new law, upon request, employers must provide employees with the pay scale for the position in which the employee is currently employed.

Pay Data Record Retention Requirement

SB 1162 introduces a new record retention requirement that was not a requirement under the old law. Now, employers of all sizes must keep records for each employee detailing that employee’s job title(s) and wage rate history throughout their employment and for three (3) years after their termination. The California Labor Commissioner will have authority to inspect these records.


Failure to comply with the above pay scale disclosure and/or record retention requirements can result in penalties ranging from $100 to $10,000 per violation. Given the expansive nature of the penalties, employers are well-advised to take the time to reevaluate their current policies and practices to ensure compliance with SB 1162’s new requirements.

Click Here for the Original Article

New York Amends Statewide Pay Transparency Law

On March 3, 2023, Governor Kathy Hochul signed a series of amendments to the New York Pay Transparency Law (“NYPTL”) into law. As we previously reported, the NYPTL takes effect on September 17, 2023 and will require covered employers to include the following information in advertisements for internal and external “job, promotion, or transfer opportunities”:

  1. The compensation or range of compensation (defined as “the minimum and maximum annual salary or hourly range of compensation for a job, promotion, or transfer opportunity”) that the employer in good faith believes to be accurate at the time of posting; and
  2. The job description for the position, if one exists.

Most notably, the amendments revise the type of advertisements that must include compensation information and a job description. Prior to the March 3 amendments, the NYPTL required employers to include such information in advertisements for jobs, promotions or transfer opportunities “that can or will be performed, at least in part, in the state of New York.” Like the New York City Salary Transparency Law, that definition appeared to include both New York-based and fully remote positions. Following the March 3 amendments, however, the NYPTL provides that compensation information and a job description must be included in advertisements for jobs, promotions or transfer opportunities “that will be physically performed, at least in part, in the state of New York, including a job, promotion, or transfer opportunity that will physically be performed outside of New York but reports to a supervisor, office, or other work site in New York.” Stated another way, advertisements for the following positions will now be covered by the NYPTL: (i) those that will be physically performed in New York, even if only in part (e.g., hybrid positions or positions requiring periodic attendance in New York); and (ii) those that will be physically performed outside of New York, but report – apparently at any level in the organizational chart – to a supervisor, office or other worksite in New York.

The March 3 amendments also relaxed employers’ record-keeping requirements. Before amendment, the NYPTL required employers to “keep and maintain necessary records to comply with the requirements of the [NYPTL],” including compensation history and all job descriptions. However, the March 3 amendments eliminate this record-keeping requirement.

Finally, the March 3 amendments clarify the meaning of the term “advertise,” which is now defined in the NYPTL as “to make available to a pool of potential applicants for internal or public viewing, including electronically, a written description of an employment opportunity.”

In light of these amendments, employers should revise their compliance efforts associated with the NYPTL in advance of the September 17, 2023 effective date. In addition to analyzing compensation information and job descriptions, employers should also be sure to update and review relevant organizational charts to determine whether a position that may be advertised reports to a supervisor, office or other worksite in New York.

Click Here for the Original Article

Reminder – NYC Employers: “Automated Employment Decision Tools Law” Will Be Enforced Starting April 15

On Friday, September 23rd, the New York City Department of Consumer and Worker Protection (“DCWP”) issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to Local Law 144 (“the Law”), which regulates the use of “automated employment decision tools” by employers. The law was originally set to go into effect January 1, 2023.

As previously reported, in October 2022, the DCWP announced that the law would not be enforced until April 15, 2023. In addition to the other obligations under the law, which are discussed in detail here and here, it is unlawful for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision unless: (i) the tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool; and (ii) a summary of the results of the most recent bias audit has been made publicly available on the employer’s website prior to the use of such tool.

Covered employers should immediately determine whether they are using any automated employment decision tools, as defined in the new law, and take any steps needed to ensure compliance.

Click Here for the Original Article

Connecticut’s Data Privacy Law Will Take Effect Soon!

On May 10, 2022, the Connecticut Data Privacy Act (“CTDPA”) was signed into law, making Connecticut the fifth state in the nation to pass its own consumer privacy regulations. The CTDPA is one of the most comprehensive and consumer-friendly state privacy laws to date, and it will take effect on July 1, 2023.

The CTDPA is designed both to protect the privacy rights of Connecticut residents and to establish corresponding privacy protection responsibilities for companies doing business in the State. The law is modeled, in large part, after the Colorado Privacy Act (“CPA”) and Virginia’s Consumer Data Protection Act (“CDPA”); all three are consumer-friendly regulations with very similar provisions. It is important that companies conducting business in Connecticut become acquainted with the State’s new privacy law. Key provisions may be unfamiliar, and the CTDPA is scheduled to take effect in under four months.

Defining Key Terms in Connecticut’s New Privacy Law

Connecticut’s new privacy law applies to “controllers” and “processors” of data – language also found in the CPA and CDPA. A controller is “an individual who, or legal entity that, alone or jointly with others determines the purposes and means of processing personal data.” A processor is “an individual who, or legal entity that, processes personal data on behalf of a controller.” The CTDPA defines a consumer as an individual residing in Connecticut who is acting as a private person. This is important because individuals who are “acting in a commercial or employment context,” which includes applying for a job, are excluded from the protections afforded by Connecticut’s consumer privacy law.

The CTDPA has heightened standards of protection when it comes to sensitive data. Sensitive data includes:

(A) data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation or citizenship or immigration status;

(B) the processing of genetic or biometric data for the purpose of uniquely identifying an individual;

(C) personal data collected from a known child (under the age of 13); and

(D) precise geolocation data.

What Are Some of the Key Provisions of the CTDPA?

Connecticut’s new privacy law applies to organizations that:

(A) control or process the personal data of 100,000 or more consumers annually, unless the personal data is controlled or processed solely for the purpose of completing a payment transaction; and/or

(B) derive over 25% of their gross revenue from the sale of personal data and control or process the personal data of 25,000 or more consumers.

Under the CTDPA, Connecticut consumers have the right to directly contact a controller to access personal data that the controller has collected about them, correct any inaccuracies in this data, and request that their data, including data collected through third parties, be deleted. Connecticut’s privacy law requires that controllers allow consumers to opt-out of the sale of their personal data and/or the processing of personal data for the purposes of targeted advertising. In addition, processors must not collect, use, store, disclose, or analyze any personal data without obtaining consumer consent.

Hire Experienced Attorneys to Help You Comply with Connecticut’s New Privacy Law

The CTDPA comprises twenty-seven pages of comprehensive and consumer-friendly regulatory provisions. As mentioned above, the law is scheduled to take effect on July 1, 2023. The key provisions discussed above are only a few of the many for which the Connecticut Attorney General will start holding companies accountable in July. Businesses operating in Connecticut should come into compliance with the CTDPA soon, if they have not already.

Click Here for the Original Article

Michigan Amends Civil Rights Act to Include LGBTQ Protections

On March 16, 2023, Michigan Governor Gretchen Whitmer signed a bill that expands the Elliott-Larsen Civil Rights Act (“ELCRA”) to include protections for LGBTQ individuals.

ELCRA Amendments

Originally enacted in 1977, ELCRA currently prohibits employment discrimination based on religion, race, color, national origin, age, sex, height, weight, and marital status. The bill amends this list of enumerated protected classes to include “sexual orientation” and “gender identity or expression.”

The bill defines “gender identity or expression” as “having or being perceived as having a gender-related self-identity or expression whether or not associated with an individual’s assigned sex at birth.” “Sexual orientation” is defined as “having an orientation for heterosexuality, homosexuality, or bisexuality or having a history of such an orientation or being identified with such an orientation.”

Practical Effect

Sexual orientation and gender identity have been protected under federal law since 2020, when the Supreme Court held that firing individuals because of their sexual orientation or transgender status violates Title VII’s prohibition on discrimination “because of sex.” Sexual orientation has also been protected under Michigan law since July 2022, when the Michigan Supreme Court held that ELCRA’s prohibition on discrimination “because of . . . sex” encompassed discrimination based on sexual orientation. However, while the Michigan Court of Claims previously held that “gender identity” was included under ELCRA’s prohibition on discrimination because of sex, the Michigan Supreme Court did not address gender identity or gender expression.

Therefore, the bill will concretely add “gender identity or expression” to the list of protected classes and will codify the protection of “sexual orientation” under Michigan law.

These amendments are significant because the Michigan Civil Rights Commission will be able to investigate claims of discrimination based on gender identity or expression and bring administrative claims. Furthermore, claims brought under ELCRA are procedurally and substantively different from federal Title VII claims. Unlike under federal law, ELCRA does not require that plaintiffs exhaust their administrative remedies before filing an employment discrimination suit. Also, supervisors can be found individually liable under ELCRA’s employment provision, but they cannot under Title VII.

The bill will take effect 90 days after its enactment. In order to avoid liability, employers should review their policies and procedures regarding discrimination and provide corresponding training to human resources and managerial employees.

Click Here for the Original Article

Michigan “Right to Work” Law Soon to Be Repealed: What Should Employers Do?

Michigan lawmakers have just approved bills that will repeal Michigan’s 2012 right-to-work law for private sector workers, ushering in a new day for labor relations in the state. When the two bills are reconciled and final language approved, Governor Whitmer has indicated she will sign the final bill into law. “Right to work” laws actually prohibit employers from requiring that their employees pay union dues as a condition of employment. In light of this news, what should private sector unionized employers do?

What Is “Right-To-Work”?

Before summarizing the situation, here’s a brief summary of “right-to-work” laws for those unfamiliar with the concept or in need of a refresher. Right-to-work laws generally make it unlawful to require a person to be or become a union member, or to pay union dues, as a condition of initial or continued employment. The name comes from the idea that people should be allowed to work without having to financially support organizations or causes that they do not morally support.

Proponents of such measures believe that they create jobs by attracting new employers to a business-friendly environment. Union advocates, on the other hand, argue that union-represented employees should share the cost of union representation.

Right-to-work laws do not prevent people from joining or supporting unions, they just prohibit requiring them to do so. In other words, they do not block those who want to join or support a union, but simply allow employees to make an individual choice about membership and financial support.

Michigan Bucks the Trend

Recent years have seen a resurgence of right-to-work laws across the country, especially in the Midwest. Indiana started the flurry of right-to-work adoption in 2012 by becoming the 23rd right-to-work state in the country, the first state to enact such a law in 12 years. That set off a chain reaction for the next few years, as Michigan (2012), Wisconsin (2015), West Virginia (2016), Kentucky (2017), and Missouri (2017) also enacted right-to-work laws.

But with Michigan’s Senate approving a bill on March 14 and the House having done so last week, the state will now reverse the trend and jettison its right-to-work law. It is currently contained within Michigan’s Employment Relations Commission Act – but not for long.

The statutory language approved by both chambers of the Michigan legislature states that neither ERCA nor any local governmental law or policy can “prohibit or limit an agreement that requires all bargaining unit employees, as a condition of continued employment, to pay to the labor organization membership dues or service fees.”

Once the two versions are harmonized and a final bill is presented to the Governor for signature, the right-to-work law will be officially repealed 90 days later.

What Should Michigan Private Employers Do?

As a result, private sector unionized employers should review the union security clauses within their current collective bargaining agreements with their labor law counsel. You will need to determine if the repeal will have any effect on your operations.

Further, if you are in the process of or about to negotiate new CBAs, you should be prepared for the union to demand a union security clause requiring union membership as a term of continued employment. Note however, that “closed shop” clauses (i.e., immediate union membership is a requirement to be hired) are illegal under section 8(a)(3) of the National Labor Relations Act. For this reason, private unionized employers should work with labor counsel to ensure any such union security clause does not run afoul of the NLRA.

Click Here for the Original Article

Iowa Passes a Comprehensive Data Protection Law

Iowa’s House and Senate have unanimously voted to approve a new and comprehensive privacy act (“the Bill”). In the absence of a comprehensive federal data protection framework, Iowa will be the sixth state to enact a state privacy law, following California, Virginia, Colorado, Connecticut, and Utah.

The Bill is materially similar to the abovementioned laws, and to Utah’s recently enacted data protection law in particular.

The Bill would apply to all entities that conduct business in Iowa, or produce a product or service that is targeted at Iowa’s residents, and meet the following thresholds during a calendar year:

  1. the entity controls or processes personal data of over 100,000 Iowa residents; or
  2. the entity controls or processes personal data of over 25,000 Iowa residents and derives over 50% of its gross revenue from the “sale” of personal data.

Certain financial and health institutions as well as non-profit organizations and institutions of higher education are exempted from the Bill. In addition, the Bill excludes employment data and publicly available personal data, such as data made widely public by the data subject, from its ambit.
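As a rough illustration of how the two applicability thresholds combine, the sketch below encodes them as a simple check. The function name and parameters are invented, the thresholds (over 100,000 consumers, or over 25,000 consumers combined with more than 50% of gross revenue from data sales) reflect the enacted Iowa text, and this is no substitute for a legal applicability analysis, which would also need to account for the exemptions noted above.

```python
def iowa_bill_applies(consumers_processed: int,
                      data_sale_revenue_share: float) -> bool:
    """Rough applicability check against the Bill's two thresholds.

    consumers_processed: Iowa residents whose personal data the entity
        controls or processes during a calendar year.
    data_sale_revenue_share: fraction of gross revenue derived from the
        "sale" of personal data (0.0 to 1.0).
    """
    meets_volume_threshold = consumers_processed > 100_000
    meets_revenue_threshold = (consumers_processed > 25_000
                               and data_sale_revenue_share > 0.50)
    return meets_volume_threshold or meets_revenue_threshold
```

For example, an entity processing data of 30,000 Iowa residents would fall under the Bill only if it also derived more than half of its gross revenue from selling personal data.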

Below are some of the key highlights of the Bill:

  • Consumer rights: the Bill provides consumers with the rights of access, deletion, and portability, and the right to opt out of sale. The Bill requires controllers to respond to consumer requests within 90 days (unlike the standard 45-day period in other states), with an option to extend that period by an additional 45 days when reasonably necessary, taking into consideration the complexity and number of requests handled. Controllers are also required to put in place a procedure allowing consumers to appeal a rejection of their requests, and to notify consumers of their right to further appeal to the Attorney General.
  • Discrimination: the Bill prohibits controllers from processing personal data in violation of state and federal laws prohibiting unlawful discrimination. Moreover, consumers shall not be discriminated against for exercising any of their rights under the Bill. Discrimination includes denial of service and differences in pricing or quality of services or products. That said, data controllers may offer a different price, rate, level, or quality of goods or services if the consumer exercises their right to opt out of sale, or where the offer is related to the consumer’s voluntary participation in loyalty, rewards, discounts, club card, or similar programs.
  • Permitted use: the provisions of the Bill shall not prevent controllers and processors from processing personal data for certain purposes, such as conducting internal research, product improvement and development, establishing legal claims, performing internal operations, and maintaining security and integrity.
  • Sensitive data: prior to processing sensitive data, controllers are required to provide consumers with notice and an opportunity to opt out of the use of sensitive data (which includes, inter alia, health, biometric, and precise geolocation data).
  • Enforcement: The Bill does not create a private right of action and can only be enforced by Iowa’s Attorney General. The Bill includes a non-sunsetting 90-day cure period, requiring the Attorney General to issue written notice of any alleged violation and to allow the controller or processor to cure the violation before taking any action. Violations of the Bill could lead to fines of up to $7,500 per violation.

The Bill is expected to be approved shortly by the governor. Once signed into law, the Bill is expected to enter into force on 1 January 2025.

Click Here for the Original Article

Colorado Privacy Act Rules Finalized; To Be in Effect July 1

On March 15, 2023, after five public input sessions, a rulemaking hearing, and over 130 written comments, the Colorado Privacy Act (“CPA”) rules were officially finalized when the Colorado Attorney General’s Office completed its review and submitted them to the Secretary of State. The final rules will be published later this month and go into effect on the same day as the statute, July 1, 2023.

There are certainly areas where the CPA rules align with the California Consumer Privacy Act (“CCPA”) (as amended by the California Privacy Rights Act, or “CPRA”). That said, there are also several areas with material differences, requiring companies to treat California and Colorado consumers and their data differently, take the highest level of harmonization approach (where possible), or make risk-based decisions to follow one state’s requirements more prescriptively than the other. The CPA rules go into greater detail about topics like profiling and automated decision-making (where we are waiting on CCPA regs) and data protection assessments. However, California has said it will consider the Colorado standards when setting California standards on these topics. There are regs concerning Global Privacy Control, called the “universal opt-out mechanism” in the Colorado rules, that align to some extent with the CCPA’s regs. The privacy notice requirements in the final CPA rules align more closely with the CCPA’s requirements than they did in prior drafts, though there are material differences that will likely require many companies to update the privacy notices (that were already updated for January 1) between now and July 1.

Obligations on covered businesses are extensive, especially for data protection assessments. To summarize, controllers subject to the CPA will have to conduct data protection assessments when engaging in processing activities that present a heightened risk of harm to consumers. Generally, assessments must include a risk-benefit analysis and discuss the safeguards that will be taken to offset the risks. The Virginia Consumer Data Protection Act (“VCDPA”) and Connecticut’s Public Act No. 22-15 (known as the “CTPA”) also impose data protection assessment obligations, but the CPA rules go further, requiring twelve explicit inquiries to be addressed, with an additional twelve required if the activity in question is profiling. Look for a blog post soon on the specifics of what will be required of privacy impact assessments.

US Privacy Legislation Landscape

To recap, the CPA is one of five state privacy laws (six, if Iowa’s governor signs into law later this month the Consumer Data Privacy Bill that the state’s legislature passed yesterday, and seven if you count Nevada’s online privacy and data broker law). Other than California, Colorado is the only state with a mandate for regulations to detail consumer privacy law implementation.

As states develop a patchwork of consumer privacy laws, there have been calls for a single federal standard. Last year, a federal bill called the American Data Privacy and Protection Act (“ADPPA”) was introduced in the House of Representatives on June 21, 2022, and amended by the Committee on Energy and Commerce on December 30, 2022; it initially seemed to be gaining some traction. However, the ADPPA failed to come to a vote in the full House of Representatives, and it is unclear what will happen with the bill in light of the new Congress. We previously reported on the ADPPA here.

As for California’s privacy regulations, the California Privacy Protection Agency (“CPPA”) voted last month to send the final proposed text of regulations to the Office of Administrative Law (“OAL”) to review and approve or reject the regulations, which were sent to the OAL on February 14, 2023. The proposed regulations have not yet been approved by the OAL, but given the 30-business-day timeline to which the OAL is subject, its approval will likely happen in the very near future. In the meantime, the CPPA has initiated preliminary rulemaking activities on the topics of cybersecurity audits, risk assessments, and automated decision-making. Public comments are being accepted now through March 29, 2023 on these topics.

Click Here for the Original Article


Illinois Supreme Court Holds New BIPA Cause of Action Accrues with Each Violation

The Illinois Supreme Court recently held that a separate claim accrues under the Illinois Biometric Information Privacy Act each time a private entity scans or transmits an individual’s biometric identifier or other protected information in violation of section 15(b) or (d) of BIPA.

A copy of the opinion in Cothron v. White Castle System, Inc. is available at: Link to Opinion.

A manager for a fast food restaurant chain brought a class action suit in federal court against the restaurant chain on behalf of a putative class of employees who allegedly scanned their fingerprints to access their paystubs and company computers. The manager alleged that the restaurant chain unlawfully collected her alleged biometric information and disclosed it to its third-party vendor in violation of sections 15(b) and (d) of BIPA.

The restaurant chain filed a motion for judgment on the pleadings, arguing that the manager’s claims were untimely because they first accrued when BIPA went into effect in 2008, more than 10 years before the complaint was filed. The manager responded by arguing that a new claim accrued each time she scanned her fingerprints, and the restaurant chain sent her biometric data to its third-party authenticator.

The federal trial court agreed with the manager and denied the restaurant chain’s motion. The trial court later certified its order for immediate interlocutory appeal, finding that its decision involved a controlling question of law on which there was substantial ground for disagreement.

The U.S. Court of Appeals for the Seventh Circuit accepted the certification and found the parties’ competing interpretations of claim accrual reasonable under Illinois law and thus certified the following question to the Illinois Supreme Court:

“Do section 15(b) and 15(d) claims accrue each time a private entity scans a person’s biometric identifier and each time a private entity transmits such a scan to a third party, respectively, or only upon the first scan and first transmission?”

The Illinois Supreme Court began by noting that section 15(b) of BIPA mandates informed consent from an individual before a private entity collects biometric identifiers or information. Specifically, section 15(b) provides that “[n]o private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier or biometric information unless it first” obtains informed consent from the individual or the individual’s legally authorized representative. 740 ILCS 14/15(b).

After reviewing section 15(b)’s plain language, the Illinois Supreme Court agreed with the manager that “collect” in the context of the statute means “to receive, gather, or exact from a number of persons or other sources” and “capture” means “to take, seize, or catch.” Webster’s Third New International Dictionary 334, 444 (1993).

Additionally, the Court disagreed with the restaurant chain that these were things that could happen only once; the restaurant chain obtained an employee’s fingerprint and stored it in its database, but the employee was then also required to use his or her fingerprint to access paystubs or company computers. The Court determined that the restaurant chain failed to explain how such a system could work without collecting or capturing the fingerprint every time the employee needed to access his or her computer or pay stub.

Furthermore, the Illinois Supreme Court held that the restaurant chain’s suggestion that the “unless it first” phrase in section 15(b) refers only to the first collection of biometric information was inaccurate because that phrase actually refers to the private entity’s statutory obligation to obtain consent or a release. See 740 ILCS 14/15(b).

Similar to section 15(b), the Court noted that section 15(d) mandates consent or legal authorization before a specific action is taken. It provides that “[n]o private entity in possession of a biometric identifier or biometric information may disclose, redisclose, or otherwise disseminate a person’s or a customer’s biometric identifier or biometric information unless” it obtains informed consent from the individual or their legal representative or has other legal authorization to disclose that information. 740 ILCS 14/15(d).

As with section 15(b), the Illinois Supreme Court concluded that the plain language of section 15(d) applies to every transmission to a third party. The Court reasoned that it did not need to specifically determine the meaning of “redisclose” in section 15(d) because the other terms in that section are broad enough to include repeated transmissions to the same party. “Disclose” means to “expose to view,” (Webster’s Third New International Dictionary 645 (1993)), and Webster’s gives as an example something happening more than once: “the curtain rises to [disclose] once again the lobby.” Id. The Court pointed out that a fingerprint scan system requires a person to expose his or her fingerprint to the system so that the print may be compared with the stored copy and that this happens each time a person uses the system.

Additionally, the Illinois Supreme Court determined that section 15(d) has a catchall provision that broadly applies to any way that an entity may “otherwise disseminate” a person’s biometric data. “Disseminate” means “to spread or send out freely or widely.” Id. at 656. The restaurant chain asserted that this was something that could happen only once, but the Court found that the chain provided no definitional support for this assertion.

Accordingly, the Illinois Supreme Court answered the certified question by holding that, under the plain language of sections 15(b) and 15(d), a separate claim accrues under BIPA each time a private entity scans or transmits a person’s biometric identifiers or biometric information without prior informed consent.

Lastly, the Illinois Supreme Court concluded that the statutory language made clear that the Illinois legislature chose to make damages under BIPA discretionary, rather than mandatory. However, the Court invited the Illinois legislature to address any policy-based concerns about potentially excessive damages awards.

Click Here for the Original Article

Blackbaud Settles Data Breach Investigation

Blackbaud, an education software provider, has agreed to pay the U.S. Securities and Exchange Commission (SEC) $3 million to resolve claims that it made misleading disclosures following a 2020 ransomware attack that compromised the data of over 13,000 customers.

Click Here for the Original Article

Judge rules against Seattle ban on criminal background checks for renters

A portion of Seattle’s “Fair Chance Housing” ordinance was struck down Tuesday by the U.S. Court of Appeals for the Ninth Circuit, which ruled that it unconstitutionally violated free speech protections.

The 2017 provision prevented landlords from asking prospective tenants about their criminal history when selecting possible renters and from refusing to rent to them because of their criminal records.

The city council passed the ordinance as a way to reduce barriers to housing, citing the disproportionate rate at which renters of color were denied housing because of their criminal histories.

The ordinance barred landlords from excluding people with criminal records in their ads, from asking about criminal history during the application process, and from rejecting tenants based on their records.

An exception allowed property owners to refuse to rent to registered sex offenders, and landlords who share part of their home with a tenant were allowed more discretion in choosing whom they live with.

For then-Council President Bruce Harrell, the ordinance was an opportunity for the formerly incarcerated to move forward with their lives without fear of being denied housing.

“They’ve paid their debt,” Harrell said at its passing. “That is the bottom line. They have paid their debt to society. They’re looking to be evaluated on their strengths and not on a decision or an unfortunate circumstance they found themselves in years ago.”

In 2018, a group of landlords filed a lawsuit against the “Fair Chance Housing” ordinance, claiming that it violated their free speech and due-process rights. Their attorneys argued that landlords have a right to ask questions that bear on their tenants’ safety.

The trial court applied “rational basis review,” under which the landlords challenging the ordinance bore the burden of showing that it did not serve a legitimate government interest.

On Tuesday, however, the U.S. Court of Appeals for the Ninth Circuit ruled that the government cannot prevent landlords from asking about applicants’ criminal histories when selecting tenants.

“The Ninth Circuit’s decision recognizes that the First Amendment protects the right to ask questions and receive information relevant to our livelihoods,” said Ethan Blevins, an attorney at Pacific Legal Foundation, the law office representing the landlords. “The government does not get to decide what information people can or cannot possess.”

The judges ruled that a portion of the ordinance was unconstitutional and overly broad.

“A complete ban on any discussion of criminal history between the landlords and prospective tenants—was not in proportion to the interest served by the Ordinance in reducing racial injustice and reducing barriers to housing,” the ruling stated.

Discrimination against those with criminal records is still illegal under the Washington Law Against Discrimination (WLAD), but the ruling allows landlords to ask about criminal records as long as they do not deny housing based solely on the record.

Click Here for the Original Article


EU/US: EDPB Welcomes Improvements in the EU-US Data Privacy Framework, but Challenges Remain

The European Data Protection Board (“EDPB” or the “Board”) on 28 February 2023, released its non-binding opinion on the draft adequacy decision underlying the EU-US Data Privacy Framework (“DPF”). The Board welcomed the “substantial improvements” to US law concerning signals intelligence gathering of data, such as the introduction of the principles of necessity and proportionality and the new redress mechanism for EU data subjects. While it expressed some discrete areas of concern, the EDPB explicitly emphasized that it “does not expect the US data protection framework to replicate European data protection law”.


In an effort to address the widespread legal uncertainty that has prevailed with respect to transatlantic data transfers since the Schrems II decision by the Court of Justice of the European Union (“CJEU”) in July 2020, President Biden on 7 October 2022 issued Executive Order 14086 on Enhancing Safeguards for United States Signals Intelligence Activities (“EO 14086”). In particular, EO 14086 directed US intelligence agencies to take steps to implement US commitments under the new DPF.

Based on those US commitments, the European Commission concluded in its draft US adequacy decision of 13 December 2022 that companies certifying compliance with the DPF Principles can provide EU data subjects with a level of data protection essentially equivalent to that provided in the EU. In connection with the final adoption of that draft adequacy decision, the European Commission requested the non-binding opinion of the EDPB.

The EDPB Opinion

Commercial protection of data

  • DPF Principles—While the EDPB welcomed several updates to the DPF Principles to which participating organisations must legally adhere, it nevertheless stated that further improvement or clarification would be beneficial with respect to data subjects’ rights to access, the absence of key definitions, the application of the DPF Principles to processors, and the broad exemption for publicly available information.
  • Onward Transfers—The EDPB requested that the European Commission “clarify that the safeguards imposed by the initial recipient on the importer in the third country, are effective in light of third country legislation, prior to an onward transfer in the context of the DPF.”
  • Automated Decision-Making and Profiling—The EDPB welcomed the European Commission’s references to specific safeguards provided by relevant US law. However, the EDPB concluded that, given that the level of protection for individuals set forth in such laws varies “according to which sector-specific rules—if any—apply to the situation at hand,” specific rules concerning automated decision-making are needed to ensure sufficient safeguards.

Government protection of data

  • Necessity and proportionality—While the EDPB recognised that EO 14086 introduced the concepts of necessity and proportionality in the legal framework of signals intelligence, it “underlined the need to closely monitor the effects of these amendments in practice, including the review of internal policies and procedures implementing the EO’s safeguards at agency level” (emphasis added).
  • Data Protection Review Court—The EDPB similarly recommended that the European Commission continuously monitor whether the redress mechanism provided for in EO 14086 and its supplemental provisions (e.g., those designed to foster the Data Protection Review Court) are implemented fully and functioning effectively in practice (emphasis added).
  • Bulk data collection—The EDPB identified the collection of bulk data pursuant to Executive Order 12333 as a particular ‘deficit’ in the DPF, as there is no requirement of prior authorisation by an independent authority or “systematic independent review ex post by a court or an equivalently independent body.”
  • FISA 702—With respect to prior independent authorisation of surveillance conducted pursuant to Section 702 of the Foreign Intelligence Surveillance Act (“FISA”), the EDPB lamented that “the FISA Court (‘FISC’) does not review a programme application for compliance with the EO 14086 when certifying the programme authorising the targeting of non-US persons, even though the intelligence authorities carrying out the programme are bound by it.”

What Next?

With the release of non-binding opinions on the draft adequacy decision by the EDPB on 28 February 2023 and by the European Parliament’s LIBE Committee on 14 February 2023, the European Commission will next seek formal approval of the draft decision from at least 55 percent of EU Member State representatives.

Assuming that the committee of EU Member State representatives approves the draft decision, the European Commission reportedly intends to adopt a final adequacy decision by July 2023, the third anniversary of the Schrems II ruling. As with the UK adequacy decision, the EDPB opinion calls for any US adequacy decision to be subject to regular reviews and monitoring by the European Commission.

Click Here for the Original Article

Business and Human Rights – the UK Government publishes new guidance on tackling modern slavery in Government Supply Chains

In what marks its latest move to tackle modern slavery, on 10 February 2023, the UK Government published its new guide for commercial and procurement professionals, entitled “Tackling Modern Slavery in Government Supply Chains” (the “Guidance”). The Guidance is aimed at helping procurement and commercial practitioners at all levels who are operating in government comply with their statutory obligations in respect of modern slavery. It builds on the UK Government’s “Slavery and human trafficking in supply chains: guidance for businesses” and its modern slavery statement Progress Report.


In 2015, the UK Government introduced the Modern Slavery Act (the “MSA”) to clarify modern slavery offences, toughen penalties for those committing such offences, and increase support for victims. Under Section 54 of the MSA, companies with an annual turnover of £36 million or more are required to produce an annual statement that sets out the steps they are taking to address the risk of modern slavery in their operations and supply chains.

The Guidance has been designed to primarily help public sector actors comply with their supply chain obligations, but it may also be relevant and useful to private sector actors. For further information on the MSA, read our earlier blog posts here and here.

The Guidance

The Guidance sets out four key areas of activity for the effective management of modern slavery risk and a number of accompanying recommended actions to help procurement and commercial practitioners who are operating in government comply with their obligations under the MSA:

  1. Identify and manage risks in new projects – practitioners should:
    • review and amend operating procedures, processes and any related documentation in line with the Guidance;
    • assess modern slavery risks in new procurements using Table 1 of the Guidance, which sets out a number of characteristics to help identify procurements that may be at a higher risk (including the nature of the workforce, the context in which the supplier operates, and the commodity type);
    • design new procurements in line with the associated risk level, including (if appropriate) application of the Social Value Model, which is a tool that provides a menu of social value outcomes for commercial teams to review and select with their stakeholders; and
    • review and amend contract management processes and any related documentation in line with the Guidance.
  2. Assess existing contracts – practitioners should:
    • carry out a risk assessment on existing contracts;
    • conduct supply chain mapping exercises if a supplier is deemed as medium or high risk and is unable to provide assurances regarding the systems and processes that they have in place to manage risks effectively;
    • invite suppliers to complete the Modern Slavery Assessment Tool (if appropriate), which generates tailor-made recommendations based on an organisation’s responses to a number of questions; and
    • apply strengthened contract management to manage risks and work with suppliers to improve, including through the use of supplier meetings, key performance indicators (“KPIs”) and audits.
  3. Take action when victims of modern slavery are identified – practitioners should:
    • work openly and proactively with suppliers to resolve modern slavery issues and change problematic working practices, which should include consideration of how to remediate workers involved, how to review a supplier’s policies/systems to prevent future incidents from occurring and how to introduce independent grievance mechanisms; and
    • consider terminating contracts with suppliers only as a last resort.
  4. Training – practitioners should raise awareness of modern slavery and human rights abuses amongst commercial and procurement staff involved in letting and managing contracts, whilst also delivering/making available appropriate training.

The UK Government’s publication of the Guidance follows the increasing pressure it is under to take action to address human rights abuses in the supply chains of both public and private sector actors. For example, it has faced legal action in respect of its failure to launch investigations in relation to the importation of cotton manufactured by forced labour in Xinjiang. The UK Government has also faced legal action in respect of its use of a Malaysian company accused of using forced labour as a supplier of personal protective equipment to its National Health Service. Further, there have been calls from investors and other stakeholders for the UK Government to introduce mandatory human rights and environmental due diligence legislation (for further information on this, read our earlier blog posts here and here).

Significance for private sector actors

Although the Guidance is primarily aimed at public sector actors who engage in procurement activities, the contents will be helpful to private sector actors who wish to develop a more consistent approach to comply with their supply chain obligations under the MSA.

Furthermore, the Guidance highlights the UK Government’s heightened expectations relating to the identification and management of modern slavery risk, which may prove challenging for companies with less sophisticated programmes. By way of example, the Guidance specifies that bidders on high-risk new procurements should detail their supply chain members and submit self-declarations for each of those supply chain members. It also sets out the following steps (based on the Social Value Model referred to above) that practitioners can take to mitigate modern slavery risk when engaging with new procurement bidders:

  1. Model Evaluation Questions: practitioners should ask a number of questions of potential new procurement bidders, including:
    • where subcontractors are used, how the supply chain will be managed and monitored for modern slavery risks and their action plans for tackling cases as they arise;
    • details of workforce conditions in factories used to produce goods to be delivered under the contract, including the workforce’s wages, working hours and rest breaks;
    • information on their working practices relating to the staff who will be assigned to perform the contract and to demonstrate their approach to tackling modern slavery/human rights abuses which might arise amongst those staff;
    • who in the company oversees the modern slavery risk and responsibility arising in relation to the goods/services to be delivered under the contract, who monitors it and how frequently, and what resources are available to identify, manage and mitigate risks; and
    • evidence of the recruitment methods used for staff delivering the contract.
  2. Model Award Criteria – practitioners should evaluate the responses to these questions against model award criteria prior to awarding any contract. These criteria may take the form of the following scoring system:
    • Excellent: the responses exceed what is expected for the criteria and leave no doubt as to the capability and commitment to deliver what is required;
    • Very good: the responses meet the required standard in all material respects. There are no significant areas of concern, although there may be limited minor issues that need further exploration or attention later in the procurement process;
    • Good: the response broadly meets what is expected for the criteria. There are no significant areas of concern, although there may be limited minor issues that need further exploration or attention later in the procurement process;
    • Poor: the response meets elements of the requirement but gives concern in a number of significant areas; and
    • Fail: the response completely fails to meet the required standard or does not provide a proposal.
  3. Reporting Metrics – practitioners should use numerical outputs related to how the supplier will deliver the quantitative aspects of social value under the potential contract. This should include consideration of, for example, the number of full-time equivalent employment opportunities that are created by the supplier in the contract supply chain in the performance of the contract.
  4. Possible KPIs – practitioners should ensure KPIs are in place to monitor progress against managing modern slavery risks throughout the contractual relationship, such as:
    • requiring commercial and frontline staff to complete annual training on modern slavery;
    • requiring new staff to be trained on modern slavery within six months of joining the organisation;
    • reporting any suspected modern slavery violations to an Executive Director immediately upon detection and investigating the reports within 48 hours; and
    • completing a given number of supply chain audits.
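Purely as a hypothetical illustration (the Guidance itself prescribes no software, and the names below are invented for this sketch), the five-point scale from the Model Award Criteria in step 2 above could be encoded as an ordered mapping so that bidder responses can be compared consistently:

```python
# Hypothetical sketch: the five-point award scale described in the
# Guidance, encoded as an ordered mapping. AWARD_SCALE and
# rank_bidders are illustrative names, not part of the Guidance.
AWARD_SCALE = {"Excellent": 4, "Very good": 3, "Good": 2, "Poor": 1, "Fail": 0}

def rank_bidders(ratings: dict[str, str]) -> list[str]:
    """Return bidder names ordered from highest- to lowest-rated."""
    return sorted(ratings, key=lambda b: AWARD_SCALE[ratings[b]], reverse=True)

print(rank_bidders({"Bidder A": "Good", "Bidder B": "Very good", "Bidder C": "Poor"}))
# → ['Bidder B', 'Bidder A', 'Bidder C']
```

In practice the qualitative judgments behind each rating matter far more than the ordering itself; the mapping only makes the relative ranking of responses explicit.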

For further assistance on how to effectively manage modern slavery risk, the Guidance directs practitioners to a number of additional resources.

Click Here for the Original Article

EU Makes Progress on EU-US Data Adequacy and Incoming Data and Cybersecurity Legislation

What’s the issue?

The European Commission has been working on overhauling aspects of the EU data and cybersecurity regimes with a particular focus on access to and protection of non-personal as well as personal data, and IoT products. More pressingly, it has also been working to resolve the issue of transfers of personal data from the EU to the USA by setting up a new framework to replace the Privacy Shield.

What’s the development?

In the last few weeks we’ve seen progress on a number of fronts:

EU-US adequacy

At the end of February, the European Parliament’s LIBE Committee refused to back the draft EU-US adequacy decision, saying the Data Protection Framework (DPF) does not provide EU citizens with a level of data protection equivalent to that in the EU. The Committee urged the Commission to renegotiate; however, its opinion is not binding on the Commission, as its part in the adoption process is limited to the right to scrutiny.

The European Data Protection Board’s views, while also non-binding, arguably carry more weight and in early March, it adopted its Opinion on the adequate protection of personal data under the EU-US Data Privacy Framework (DPF). In summary, the EDPB welcomes improvements as compared with the Privacy Shield, in particular, the recognition of the principles of necessity and proportionality, and the enhanced oversight and redress regime. However, it also expresses a number of concerns, recommending the Commission seek clarification, and underlines that the effectiveness of the framework will depend on the extent to which it is followed through in practice. In other words, the EDPB is not dismissing the DPF nor seeking to block the EU-US adequacy decision, but neither is it giving unqualified support.

General data protection aspects

The EDPB comments that the DPF Principles have not changed significantly from those under the Privacy Shield. As a result, some of the issues of concern under the Privacy Shield remain, including those relating to the rights of data subjects, the absence of key definitions, lack of clarity around application to processors and a broad exemption for publicly available information. The EDPB is also concerned about protections for onward transfers, and the fact that protections around automated decision-making, profiling and AI technologies tend to be sector specific. The EDPB says in these areas, rules are needed to provide sufficient safeguards, including the right for the individual to know the logic involved, to challenge the decision, and to obtain human intervention where the decision significantly affects them.

The EDPB stresses the importance of effective oversight and enforcement and underlines the need for compliance checks. The EDPB says it will be monitoring these aspects, and the effectiveness of redress mechanisms (many of which are the same as those in the Privacy Shield) closely, including in the context of periodic reviews.

Access to EU personal data by US public authorities

The EDPB recommends that Executive Order 14086 (EO) be accompanied by updated policies and procedures across all US intelligence agencies. It recommends the Commission assess these and share its assessment with the EDPB. The EDPB says the EO represents a “significant improvement” by introducing additional safeguards and the concepts of necessity and proportionality into the US legal framework on signals intelligence. It also finds the proposed redress mechanism for EU citizens alleging unlawful use of their data by US public bodies to be “significantly improved” compared with the Ombudsperson mechanism under the Privacy Shield. However, the EDPB sees a need for further clarification on questions relating in particular to “temporary bulk collection” and to the further retention and dissemination of bulk collection data.

The EDPB’s focus is on the holistic approach to the safeguards and it raises a number of concerns about particular aspects of the US bulk data collection regime under FISA and Executive Order 12333. It also raises concerns about the practical functioning of the Data Protection Review Court which, it says, will require monitoring by the Commission to ensure it is not routinely dismissing claims.

Review and monitoring

The EDPB concludes that the EO provides “substantial improvements” compared to the previous framework but asks for its concerns to be addressed and for the Commission to provide requested clarifications and ongoing monitoring of the implementation of the DPF and the safeguards it provides. It also says it expects the Commission to stick to its commitment to suspend, repeal or amend the adequacy decision on grounds of urgency if necessary.

Full steam ahead?

Despite expressing some concerns, the EDPB’s Opinion does not contain anything likely to hold up a new EU-US adequacy decision and, notwithstanding the disapproval of the European Parliament’s LIBE Committee, we should see the decision approved shortly.

Data Act

The European Parliament has agreed its negotiating position on the EC’s draft Data Act. Suggested amendments include:

  • clarifications of the types of data in scope
  • strengthening trade secret protection for data holders
  • clarifying provisions on switching cloud providers
  • extending the fairness check, which prevents the imposition of unfair contractual terms by large companies, to all companies regardless of size
  • clarifying what constitutes a public emergency allowing public bodies to request access to privately held data and allowing for fair remuneration for that access
  • giving the European Data Innovation Board a role in coordinating enforcement of the Regulation.

Once the Council agrees its final position, trilogues will begin.

Cyber Resilience Act

A new compromise text for the Cyber Resilience Act has reportedly been published by the Swedish presidency of the EU Council. Suggestions are that proposed changes are not particularly significant and that some of the more controversial elements have not yet been amended. There is clarification on interaction with the AI Act and General Product Safety Regulation, a new article mandating Member States to put appeal procedures in place for product manufacturers to challenge the decision of accredited auditors, and clarification around categories of penalties.

European Commission to propose legislation to harmonise GDPR enforcement

The European Commission published a call for evidence at the end of February regarding its plans to introduce legislation to further harmonise GDPR enforcement by national regulators. The legislation is likely to harmonise administrative procedures and cooperation mechanisms for cross-border cases. The call closes on 24 March.

What does this mean for you?

These are EU developments but, once concluded, they are likely to have a significant impact on relevant cross-border UK businesses, which will also have to get to grips with a new UK data protection regime as we discuss here, and other parallel initiatives to EU proposals. The UK is also progressing a data bridge to facilitate frictionless data flows between the UK and the US, and will be keen to keep up with, if not outrun, the EU in reaching an adequacy arrangement with the US.

Click Here for the Original Article

Canadian Employment Law Trends: Looking Back and Moving Forward

In 2022, the Canadian employment law landscape continued to evolve. We have summarized some of the most noteworthy developments from last year to help you stay up-to-date and share our outlook of which trends will likely continue throughout 2023.




  • In June 2022, British Columbia amended its Labour Relations Code to permit the automatic certification of workplaces where 55% or more of employees sign union membership cards. A vote will still be held where between 45% and 55% of employees in a workplace have signed cards.
  • In December 2022, the Canada Labour Code’s (CLC) new paid medical leave provisions came into force, providing employees in federally regulated workplaces with access to a maximum of 10 days of paid medical leave per calendar year. These paid sick days are in addition to the much longer unpaid sick leave entitlements that are available under the CLC. For more information, please read our November 2022 Blakes Bulletin: Federal Employees to Accrue New Paid Medical Leave Days Starting December 1, 2022.


  • The Alberta Court of Appeal (ABCA) released a significant decision on constructive dismissal in Kosteckyj v. Paramount Resources Ltd. In that case, the ABCA emphasized that employees will need to act quickly when they are faced with unilateral changes to their terms and conditions of employment. Otherwise, they will be seen to have consented to the changes at issue and will be hard-pressed to make successful constructive dismissal claims.



  • The 2021 Supreme Court of Canada decision in Northern Regional Health Authority v. Horrocks clarified the legal tests that apply to determine the circumstances in which labour arbitrators and human rights authorities will have concurrent or exclusive jurisdiction over claims. Developments in that regard continued following the case, such that the question of jurisdiction remains a nuanced issue and needs to be considered in light of the legislative regime in each province.



We expect employment and labour law to continue to develop in meaningful ways over the coming year. In particular, we expect that:

  • Remote and hybrid work will continue to pose challenges to employers in terms of workplace culture, productivity and employee flexibility
  • Termination clauses in employment agreements will continue to be litigated, with potentially inconsistent results
  • Employers will ensure respectful workplaces by taking steps to mitigate the risk of employee misconduct complaints and appropriately addressing those that arise
  • Quebec employers must stay vigilant and aware that the labour tribunal continues to use its power to reinstate dismissed employees; and further guidance is expected from the Quebec regulator on the details of language compliance matters
  • Increased restructuring will lead to additional case law on notice periods and constructive dismissal.

Click Here for the Original Article

Ontario, Canada Proposes ESA Amendments Relating to Remote Workers and New Hires

On March 13, 2023, Ontario announced that it is proposing two amendments to the Employment Standards Act, 2000 (ESA) and related regulations.

Employees Who Work Solely from Home to Become Eligible to Receive Enhanced Notice in Context of Mass Termination  

Ontario proposes to make employees who work solely from home eligible to receive enhanced notice in the context of a mass termination by expanding the ESA definition of “establishment” to include employees’ remote home offices. Under the ESA, in the context of an individual termination, an employee whose employment is terminated after five years of service is entitled to five weeks’ notice or pay in lieu of notice. If, however, an employee’s employment is terminated in the context of a “mass termination,” they are entitled to enhanced notice.

A “mass termination” occurs when the employment of 50 or more employees is terminated at an employer’s “establishment” within a four-week period.  In such circumstances, an employee would be entitled to a minimum of eight weeks’ notice; however, the number of weeks’ notice an employee is entitled to receive depends on the number of employees affected:

Number of Employees Affected    Notice Required
50 to 199                       At least 8 weeks
200 to 499                      At least 12 weeks
500 or more                     At least 16 weeks

If the proposed amendment to the ESA definition of “establishment” is passed, employees who work solely from home would be entitled to receive the same enhanced notice as “in office” and other employees in the event of a mass termination.
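The tiered thresholds above can be sketched as a simple lookup. This is an illustrative calculation only, not legal advice; the function name and the treatment of counts below 50 are our own framing of the table.

```python
# Sketch of the Ontario ESA "mass termination" notice tiers described above.
# Fewer than 50 affected employees in a four-week period falls outside the
# mass-termination rules, so we return None in that case.

def mass_termination_notice_weeks(employees_affected: int):
    """Return the minimum weeks of notice, or None if the mass
    termination rules do not apply (fewer than 50 employees)."""
    if employees_affected >= 500:
        return 16
    if employees_affected >= 200:
        return 12
    if employees_affected >= 50:
        return 8
    return None  # individual termination entitlements apply instead

print(mass_termination_notice_weeks(120))  # → 8
print(mass_termination_notice_weeks(500))  # → 16
```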

New Hires to be Provided Information in Writing About Their Jobs

Ontario’s announcement also indicates that proposed regulatory changes would require employers to provide new hires with information in writing about their jobs, such as pay, work location and hours of work, and the date by which that information must be provided (e.g., before their first shift).  At the moment, the ESA requires employers to share with new employees only the latest version of the employment standards poster, which outlines ESA workplace rights and responsibilities.

It remains to be seen whether these proposed changes will be passed via the legislative process once a Bill is introduced.  We will follow developments as they unfold and provide updates when information becomes available.

Click Here for the Original Article

Individuals are Entitled to Know the Specific Recipients of their Personal Data

The Court of Justice of the European Union (the “CJEU”) has held that controllers must inform data subjects of the actual identities of the recipients of data subjects’ personal data when responding to a data subject access request.

Article 15(1)(c) GDPR states that the controller must provide information on the recipients or the categories of recipient to whom personal data have been or will be disclosed.

The CJEU held in Case C-154/21 that the actual identities of recipients should be disclosed except where the data subject elects to request information concerning the categories of recipients only or where the controller can demonstrate that:

  • it is impossible to provide the identities of recipients (for example, if the recipients are not yet known); or
  • the request is manifestly unfounded or excessive (in particular due to the repetitive nature of the requests from the data subject to the same controller).

This judgment will increase the operational burden on controllers when responding to data subject access requests. Going forward, it would be sensible for controllers to create and maintain a record of the recipients to whom personal data are disclosed.

“…the data subject must have, in particular, the right to be informed of the identity of the specific recipients where his or her personal data have already been disclosed”
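Given the recipient-list suggestion above, here is a minimal sketch of what such a disclosure register might look like. Everything in it (the class name, fields and example recipients) is hypothetical and illustrative, not a mechanism prescribed by the GDPR.

```python
# Illustrative sketch only: one way a controller might log disclosures so
# that the specific recipients of a data subject's personal data can be
# reported on request under Article 15(1)(c). All names are hypothetical.
from collections import defaultdict

class DisclosureRegister:
    def __init__(self):
        # data subject identifier -> set of recipient names
        self._recipients = defaultdict(set)

    def record_disclosure(self, subject_id: str, recipient: str) -> None:
        self._recipients[subject_id].add(recipient)

    def recipients_for(self, subject_id: str) -> list:
        """Specific recipients to report in response to an access request."""
        return sorted(self._recipients[subject_id])

reg = DisclosureRegister()
reg.record_disclosure("subject-42", "Payroll Provider Ltd")
reg.record_disclosure("subject-42", "Pension Trustee Co")
print(reg.recipients_for("subject-42"))  # → ['Payroll Provider Ltd', 'Pension Trustee Co']
```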

Click Here for the Original Article

UK Data Reform is Back: Data Protection and Digital Information Bill (No. 2) is Laid Before Parliament

In July 2022 the Data Protection and Digital Information Bill (the original Bill) was introduced into Parliament and we finally got sight of the UK Government’s intended direction for data protection post-Brexit. Please see our blog here on the key areas of reform and implications for issues such as EU adequacy.

The original Bill was then withdrawn in September 2022, as the government transitioned to the policy priorities of new Prime Minister Liz Truss, and then Rishi Sunak. The new Digital, Culture, Media and Sport (DCMS) Secretary of State, Michelle Donelan, announced that the government was seeking to go further in its reform of the GDPR, and a period of consultation opened with stakeholders; a series of meetings as opposed to a fresh formal consultation (which had already taken place in 2021 under the banner, Data: a New Direction). This stakeholder consultation involved a Business Advisory Group including the Data and Marketing Association, the Advertising Association, Which?, TechUK and other stakeholders.

The policy responsibility for data protection also moved to the newly formed Department for Science, Innovation and Technology (DSIT) in February 2023. The DSIT announced its 5 priorities for 2023, which unsurprisingly include: “Deliver key legislative and regulatory reforms to drive competition and promote innovation, including the Data Protection and Digital Information Bill, the Digital Markets, Competition and Consumer Bill and our pro-innovation approach to regulating AI”.

Where are we now?

A new Data Protection and Digital Information Bill (No. 2) (the new Bill) was laid before Parliament on 8 March and the UK reform process is now clearly back in play. The new Bill also comes with a new set of explanatory notes but sadly no official Keeling schedule at this stage (which would present the Bill in the form of redline changes to the UK General Data Protection Regulation, the Data Protection Act 2018 and the Privacy and Electronic Communications (EC Directive) Regulations 2003).

The DSIT press release made the impressive claim that the new Bill would now enable £4.7bn of savings over the next 10 years. This is a significant figure, and no doubt the economic analysis behind it will be subjected to some scrutiny as the new Bill passes through Parliament, particularly as many multinational companies may not seek to amend their data protection compliance programmes if their global programme is centred on the EU GDPR. The message from DSIT continues to recognise that EU GDPR compliance will support UK GDPR compliance.

We can expect the new Bill to have its second reading in the next few weeks and looking at the passage of the previous Data Protection Act 2018 the process to Royal Assent may last up to a year. It is also worth noting that a general election is expected in late 2024, so the new Bill will need to be passed before then.

Consideration should also be given to the wider context into which the new Bill arrives. Data protection reforms are underway in countries such as Australia, Canada and India and the UK framework that is created by this new Bill will be viewed in that context, as well as in comparison with EU GDPR.

What has changed?

In this blog we take a look at how the new Bill has changed and some of the likely implications. The headline is that most of the original Bill has been retained and there are a relatively small number of changes.

New provisions

Here are the most notable changes in the new Bill:

  • Legitimate Interest. The most significant change is probably the clarification added regarding legitimate interests. Recital 47 of the GDPR has always contained the clarification that “the processing of personal data for direct marketing purposes may be regarded as carried out for a legitimate interest”. The new Bill adds a non-exhaustive list of scenarios where organisations may rely on the legitimate interests lawful basis, including for the purposes of i) direct marketing, ii) transferring data within the organisation for administrative purposes, and iii) ensuring the security of network and information systems. It is important to note that the necessity and balancing tests will still need to be met.

This change elevates the clarification regarding direct marketing from the recital onto the face of the law. This is something the trade body, the Data and Marketing Association, publicly called for. They felt the recital did not give enough confidence to those companies that wanted to use legitimate interests in certain scenarios. Legitimate interests and direct marketing purposes were also the subject of the recent Experian judgment by the First-tier Tribunal, with Experian winning their appeal in relation to a number of grounds, though the ICO has now appealed to the Upper Tribunal.

The Explanatory Notes also clarify that controllers may rely on Article 6(1)(f) to process personal data for other legitimate activities, if the processing is necessary and the balancing test is carried out. This appears to be an indication that controllers can consider other legitimate commercial activities as well. It seems likely that this has been added to try to avoid the uncertainty that exists in the EU, where there is still some uncertainty about whether purely commercial interests can be legitimate interests, and a case from the Netherlands has been referred to the Court of Justice of the EU (CJEU). Also see our previous article on EU case law and enforcement actions.

  • Scientific research. As with legitimate interests, the original Bill moved some elements from the GDPR recitals into the primary legislation for greater clarity. The new Bill retains the non-exhaustive list of scientific research types, such as applied or fundamental research or innovative research into technological development, but also adds firmer language confirming that research may constitute “scientific research” “whether carried out as a commercial or non-commercial activity”. There is now likely to be significant debate about the breadth of the clarified definition and how companies can benefit (particularly in areas such as AI research and development), as well as any risks to individuals’ privacy that could flow from this.
  • Records of processing activities. Under the new Bill, the record keeping requirement now only bites when the personal data processing is likely to result in a high risk to the rights and freedoms of individuals. This is most likely to be of benefit to UK SMEs who don’t have establishments in the EU.
  • Automated decision-making. There is a further clarification added to the provisions in the original Bill (which replaced Article 22 and moved from prohibitions to conditions). The new Bill now clarifies: “When considering whether there is meaningful human involvement in the taking of a decision, a person must consider, among other things, the extent to which the decision is reached by means of profiling”. This appears to be an additional safeguard to ensure that human involvement is effective in practice and does not amplify existing risks associated with data-driven decision-making.

The Secretary of State can also now add further scenarios where there is (or isn’t) meaningful human involvement. Our previous blog noted the number of areas where the Secretary of State will be able to amend or add to the UK GDPR via secondary legislation and this may add to the debate about the government’s “Henry VIII” powers (ie clauses in a bill that enable ministers to amend or repeal provisions in an Act of Parliament using secondary legislation, potentially shifting power to the executive), particularly on areas related to AI.

  • Existing safeguards for international transfers. Though not a major concern (most practitioners had assumed this would be the case), the new Bill now makes clear that existing safeguards for international data transfers will still be lawful once the new Bill becomes law and takes effect. This will include standard clauses (ie the UK’s Addendum to the EU standard contractual clauses and the UK’s International Data Transfer Agreement).

Further implications for adequacy

In our previous blog, we considered the implications for adequacy and concluded that the original Bill did not create a strong risk to the UK’s adequacy status with respect to the EU. That said, we considered that there would be areas where the EU may look closely, such as the independence of the ICO and the changes on international transfers. The new Bill doesn’t change this analysis and DSIT officials speaking recently at the IAPP 2023 UK conference continued to maintain their position that adequacy could be maintained and that they had remained engaged with the European Commission, updating them about their work.

Next up: AI reform

The data protection community is also awaiting the announcement of the government’s long-trailed AI white paper, which should be published in the coming weeks. This will have some important intersections with the new Bill, not least whether the changes to the UK GDPR Article 22 provision on automated decision making (replacing a broad prohibition with the need to meet certain conditions including regarding human intervention) will create the right balance of protections for use of AI.

Click Here for the Original Article

Common Issues in Users’ Personal Information Protection for Companies in China and Establishing a Compliance System

Influenced by Europe’s GDPR, personal information protection legislation has emerged on a global scale. Within this context, China officially promulgated the Cybersecurity Law in November 2016, the first general law in China to comprehensively regulate cyberspace security management issues and address the basic requirements on personal information protection at the legal level. In May 2020, the Civil Code of the People’s Republic of China was officially adopted, explicitly listing personal information and privacy within the scope of personality rights, which are basic rights enjoyed by all civil subjects (i.e. natural persons, legal persons, and unincorporated associations). Since then, the Data Security Law and the Personal Information Protection Law, enacted respectively in June and August 2021, have further improved legal protections on personal data in China. Moreover, as the first law to specifically protect personal information, the Personal Information Protection Law provides a fundamental basis for subsequent legislation, judicial practice, and law enforcement in relation to personal information protection. In addition, in the foreseeable future, China will continue to enact laws and regulations, such as the Administrative Regulations on Network Data Security (currently being prepared), to further implement the provisions on personal information protection.

In this context, regulatory and judicial authorities are continuing to strengthen law enforcement efforts against companies in respect of their processing of personal information. In January 2019, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security and the State Administration for Market Regulation jointly issued the Announcement of a Special Crackdown on the Illegal Collection and Use of Personal Information by Apps, clarifying that they would organize a special nationwide investigation of the illegal collection and use of personal information by apps, with the Cybersecurity Law, the Law on the Protection of Consumer Rights and Interests, the Personal Information Protection Law and other laws and regulations as the legal basis.

Specifically, the competent authorities will supervise and punish the illegal collection and use of personal information, and punish:

  • The compulsory and excessive collection of personal information,
  • The collection and use of personal information without consent of consumers,
  • The failure to take remedies for the occurrence or possible occurrence of data leakage or loss, and
  • The illegal sale and provision of personal information to others.

Relevant penalties mainly include ordering operators of violating apps to rectify the violations within a set time limit. If the operator fails to do so, the violations may be publicized. In serious cases, operators may also face fines, suspension of the relevant business, business operations being halted for rectification, or revocation of the relevant business permits or business licenses. So far, the authorities have announced that they have found thousands of apps in violation and have taken corresponding measures against the companies that developed and operate them.

Establishing a user personal information protection compliance system

Key points in the processing of user personal information

Different types of personal information have varying impacts on the rights and interests of personal information subjects; therefore, the laws and regulations have different requirements, depending on the type of personal information. In light of this, companies involved in personal information collection activities must identify the types of collected user personal information and identify sensitive personal information, so as to provide the basis for the subsequent implementation of various compliance requirements on personal information protection.

In addition, the relevant laws and regulations provide that the processing of personal information must follow the principle of minimization, i.e., the processing of personal information must be (i) for a specific and reasonable purpose, (ii) directly related to the purpose of the processing, and (iii) conducted by the method with the minimum impact on personal rights and interests.

Personal information collection must be limited to the minimum scope necessary for realizing the purpose of the processing, and personal information must not be excessively collected. China’s data protection legislation tends to view as illegal both the collection of bulk personal information by bundling various types of services together and bulk applications for consent. Therefore, companies must assess, in light of the various services provided by their products, whether the user personal information they collect serves clear and reasonable purposes and is directly related to those purposes; that is, whether the collection is necessary for realizing the business functions of their products.

According to the relevant requirements of laws and regulations, if the collection of personal information is based on the individual’s consent, this consent must be voluntarily and clearly given by the individual on the premise that they have been fully informed.

Prior to the collection of personal information, a personal information processor must, in an eye-catching manner and with clear and understandable language, inform the personal information subject, in a truthful, accurate and complete manner, of the following:

  • The name and contact information of the personal information processor,
  • The purpose of collecting the personal information,
  • The time period of use of the personal information,
  • The storage location and storage period of the personal information, and
  • The personalized recommendation settings for such personal information.

If, after the collection of personal information, the purpose of use of the personal information collected needs to be changed due to business needs, the company must inform the subject what specific personal information is involved, the reason for the change, and the purpose of processing after the change, and it must again obtain the express consent of the subject of personal information.

Companies may establish an appropriate lifecycle compliance control system for user personal information by referring to China’s national standard documents, such as the Information Security Technology: Personal Information Security Specification (Standard GB/T 35273-2020) and make appropriate adjustments in light of the actual business situations.

Cross-border transfers of personal information

China’s national data security supervision authority has set up relatively strict process requirements for the cross-border transfer of personal information. In principle, cross-border personal information transfers may only be conducted when the relevant laws and regulations have been satisfied, and it is indeed necessary to transfer the personal information outside China’s borders due to business or other needs. Specifically, a cross-border personal information transfer must satisfy one of the following mechanisms:

  • Pass a security assessment organized by the national cyberspace administration authorities,
  • Obtain a certification of personal information protection from a professional institution, in accordance with the regulations of the national cyberspace administration authorities, or
  • Sign a contract with the overseas recipient, in accordance with the standard contract developed by the national cyberspace administration authorities, specifying the rights and obligations of both parties.

To date, the Chinese regulatory authorities have issued the Measures for Security Assessment of Data Cross-border Transfers and supporting documents, as well as an exposure draft of the Provisions on Standard Contracts for Cross-border Transfers of Personal Information, which companies should pay special attention to when dealing with personal information cross-border transfer issues.

Click Here for the Original Article


NJ Warns Landlord about Rejecting Formerly Incarcerated People

New Jersey’s attorney general told dozens of landlords to stop violating the rights of formerly incarcerated rental applicants, but — as he did last year — stopped short of naming or penalizing them.

The office of Attorney General Matthew Platkin issued 59 notices of violation to housing providers, according to reports. The indiscretions range from asking discriminatory questions on rental applications to running advertisements that discourage people with criminal records from applying.

Those actions were banned at the beginning of last year when New Jersey passed the Fair Chance in Housing Law. The legislation bars landlords from asking about criminal history until a conditional offer is made.

Once that offer is made, landlords can then consider specific historical information about the applicant, including convictions that are recent, involve serious crimes or require lifetime sex offender registration.

Landlords are also required to consider information that benefits an applicant, such as letters of recommendation. Landlords who withdraw an offer must explain why, and applicants can appeal.

The law was designed to make it easier for people who were once incarcerated to find housing, which reduces recidivism. Since its passage, however, landlords have been violating the statute in significant numbers, some of them quite openly.

In August, 30 notices were issued for violations but penalties were not imposed, although the legislation allows for them.

Penalties can be significant, particularly for smaller landlords: up to $1,000 for the first offense, $5,000 for the second and $10,000 for each additional one.
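The escalating penalty schedule can be expressed as a small lookup. This sketch is based only on the “up to” maximums quoted above; the function name and cumulative-exposure example are our own illustration, not legal advice.

```python
# Sketch of the maximum civil penalties under New Jersey's Fair Chance in
# Housing Law, by offense number. Actual penalties are "up to" these amounts.

def max_penalty(offense_number: int) -> int:
    if offense_number <= 0:
        raise ValueError("offense_number must be >= 1")
    if offense_number == 1:
        return 1_000
    if offense_number == 2:
        return 5_000
    return 10_000  # each additional offense

# Maximum cumulative exposure after three violations:
print(sum(max_penalty(n) for n in (1, 2, 3)))  # → 16000
```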

Click Here for the Original Article

Exaggerating Job Titles Won’t Magically Transform Employees’ Overtime Eligibility

When did a “Receptionist” become an “Assistant Manager of Reservations?” According to a new research study, employers’ use of exaggerated job titles has increased rapidly over the past decade. The study points to a trend that some employers may be attempting to avoid overtime pay by casting more employees as “managers.” Understanding the intricacies of the federal overtime statute (the Fair Labor Standards Act) and equivalent state wage and hour laws is challenging – but one thing that is easily understood is that job titles alone are not necessarily determinative of whether an employee is eligible for overtime pay under the law.

The FLSA presumes that all employees are eligible for overtime pay unless an employer can demonstrate that one of several applicable exemptions apply to an employee’s position (thus making the employee “exempt”). The FLSA provides exemptions for individuals employed as bona fide executive, administrative, and professional employees – the so-called “white collar” exemptions – among some other exemptions. To qualify for a white collar exemption, employees generally must meet certain tests regarding their job duties and be paid on a salary basis of not less than $684 per week. The Department of Labor, which enforces the FLSA, has made clear that job titles do not determine exempt status. While many managers may qualify as “exempt” from overtime rules under these exemptions, employers should be careful to review each employee’s job duties (both per their job description and in practice) in order to ensure that each employee clearly meets the exemption’s requirements. If an employee or group of employees within a job title are incorrectly classified as “exempt,” the employer could find itself subject to a private lawsuit or enforcement action and the possibility of serious damages and back payments.

It’s not just workers who can exaggerate their job titles — their employers are doing it too, and it’s to avoid paying them for their work. That’s according to a new paper from the National Bureau of Economic Research, which found that companies avoid paying their employees overtime by taking advantage of a loophole in federal labor law, in which managers are paid their fixed salaries even when they work beyond their prescribed hours. That loophole involves misclassifying workers as managers, even if they don’t have actual managerial duties. The researchers highlighted some particularly egregious examples they found, including barbers who are classified as “grooming managers” and front desk attendants who are hired as “Directors of First Impressions.”
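The point that job titles do not drive the analysis can be illustrated with a toy screen. This is a deliberately simplified sketch, not the actual FLSA duties tests; the $684-per-week figure is the one cited above, and the function name and parameters are our own.

```python
# Hedged sketch (not legal advice): the white-collar exemption analysis
# turns on salary and duties, never on job title. Note that job title is
# deliberately not a parameter of this first-pass screen.

SALARY_THRESHOLD_PER_WEEK = 684  # minimum weekly salary for most white-collar exemptions

def may_be_exempt(weekly_salary: float, meets_duties_test: bool) -> bool:
    """True only if BOTH the salary threshold and the duties test are met.
    The real duties tests are far more detailed than a single flag."""
    return weekly_salary >= SALARY_THRESHOLD_PER_WEEK and meets_duties_test

# A "grooming manager" title with no managerial duties fails the screen:
print(may_be_exempt(weekly_salary=900, meets_duties_test=False))  # → False
print(may_be_exempt(weekly_salary=900, meets_duties_test=True))   # → True
```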

Click Here for the Original Article

Maryland Employers Could Face Difficulties Defending Harassment Claims Under New State Laws

The Maryland state legislature has enacted two bills that may make it harder for employers to defend against harassment claims. The two employee-friendly bills, which took effect on October 1, 2022, lowered the legal standard required to establish a harassment claim and extended the period during which a person may file a civil action alleging an unlawful employment practice.

Senate Bill 450 allows employees to establish a claim of harassment based on facts that previously might not have risen to the level of a legally actionable harassment claim. In the past, Maryland courts had applied the “severe and pervasive” standard when evaluating harassment claims, which most jurisdictions traditionally have used.

Under the new legislation, Maryland courts will apply a new “totality of the circumstances” standard, which may permit employees to pursue claims involving less egregious or frequent incidents of harassment than under the prior standard. The new standard also may make it more challenging for employers to win dismissals of harassment claims and easier for aggrieved employees to prevail.

Additionally, SB 450 alters the definition of “harassment” to specifically include sexual harassment and mandates sexual harassment prevention training for state government employees.

Senate Bill 451 tolls all time limitations on bringing civil actions on claims of unlawful employment practices while administrative charges are pending. In other words, it expands the timeframe during which employees must file civil lawsuits based on unlawful employment practices based on the administrative charge process. As a result, employer defenses based on timeliness may be weaker.

Click Here for the Original Article

GDPR for HR Newsletter March 2023 | Data Protection Guidance on Processing Health Data

ICO guidance on handling workers’ health data

The Information Commissioner’s Office’s (ICO) consultation on its draft guidance on handling the health information of workers ended on 26 January 2023. This guidance follows the ICO’s recent consultation on its draft monitoring at work guidance. These consultations are the first part of an ongoing project for the ICO to replace its employment code of practice with new guidance.

The guidance:

  • Reiterates that gathering information about workers’ health is intrusive and is highly intrusive where the information is particularly sensitive. If employers want to collect and use information regarding workers’ health, they need to be very clear about why they are doing so. The ICO notes that, while workers will reasonably expect to share a proportionate amount of health data, they can legitimately expect that their employers will respect their privacy when doing so.
  • Encourages organisations to consider whether there are more targeted ways of collecting health data that would deliver more acceptable outcomes for workers.
  • Reminds organisations to be clear about the purposes for processing health data and make such information available to workers.
  • Reminds organisations that consent is one of the lawful bases for the processing of personal data, but warns that UK law sets a high standard for consent and people must have a genuine choice over how their data is used. As such, it may be difficult for organisations to rely upon consent to process health data about their workers.
  • Recognises that it would be good practice to carry out a data-protection impact assessment before processing health data. This, however, may only be applicable to employers who intend to process health data that is likely to pose a high risk to workers (such as conducting medical tests).
  • Reminds organisations to ensure that appropriate security measures are in place to protect workers’ health information and that access to such information should be restricted as appropriate on a need-to-know basis.

In the news

WhatsApp and DSAR cases

In a recent case, FKJ v RVT and others, the High Court refused to strike out a claim for misuse of private information brought by the claimant against her ex-employer, which had dismissed her for misconduct. The claim arose from the managing partner having obtained 18,000 of the claimant’s private WhatsApp messages, which were used as evidence against her in employment tribunal proceedings. The claimant alleged he had ‘hacked’ into her WhatsApp account to gain access.

The defendants’ strike out application was dismissed. The judge concluded that it was without merit and an attempt to stifle the claimant’s claim. Notably, the defendants’ argument that the claimant’s privacy claim would face significant problems on the merits was rejected, and the judge said that, on the facts, it could not be seriously contested that the claimant had a reasonable expectation of privacy in relation to the WhatsApp messages. A useful case to cite when access to WhatsApp messages is being considered.

In RW v Österreichische Post AG, the European Court of Justice provided clarification on the right of access to personal data and information in relation to data subject access requests (DSARs). The court ruled that the data subject’s right of access to information about the processing of their personal data under article 15(1) of the General Data Protection Regulation must be interpreted as meaning that it will extend, where the data subject requests, to the identification of the specific recipients to whom their personal data are disclosed. Not good news for those on the receiving end of a DSAR!

How is artificial intelligence combatting burnout? ‘We’re bringing in AI in order to bring back humanity’

Grace Kintsugi, co-founder and CEO at Kintsugi, has developed an AI-powered mental health platform that can detect depression and anxiety using short audio clips of someone’s speech. The company is launching a three-month pilot scheme with one of the US’ largest health insurance companies, enabling employees to leave voice notes – an “audio journal” of their feelings.

UK government reignites data protection reform

The UK government published a second iteration of the Data Protection and Digital Information Bill (the first iteration was published in July 2022 and has since been withdrawn). The bill is designed to reform UK data protection law post-Brexit, and this second iteration makes relatively few substantive changes to the first version, although there are some useful amendments, including on record-keeping, international transfers and scientific research. Rather than wholesale changes to the UK’s privacy framework, the bill’s proposals can be characterised as an evolution, not a revolution. Overall, the bill aims to reduce the administrative burden on businesses, promote innovation and reform the Information Commissioner’s Office.

Click Here for the Original Article

2023 HR Budget Briefing

United Kingdom – Despite the Government having made a number of other “fiscal statements” in the last 12 months, today’s Budget still contained a number of new and noteworthy tax and financial support measures affecting employees and their employers. Changes to the taxation of employees’ pension arrangements are particularly significant (especially the abolition of the “lifetime allowance”). The Chancellor also unveiled several new schemes to support individuals in work and amendments to make HMRC tax-advantaged employee share schemes available to more companies and easier to operate.


Pensions

There was a huge surprise in today’s Budget for pensions savers. Rather than increase the lifetime limit on tax-efficient pensions savings (the subject of intense speculation ahead of the Budget), the Chancellor announced that the lifetime allowance would be completely removed. This is part of a package of proposed changes to pensions tax allowances designed to tackle labour supply issues: encouraging older workers considering retirement to remain in employment and encouraging those who have already left the workforce to return. The proposals are set out in a Pensions Tax Limits policy paper. In summary:

  • Lifetime allowance tax charges will be removed from 6 April 2023, with the lifetime allowance (which currently stands at £1,073,100) being entirely removed from legislation at a later point.
  • The maximum amount of tax-free cash (the pension commencement lump sum) will be retained at the current level of £268,275 from 6 April 2023 and will be frozen going forward. However, individuals with a protected right to take a higher amount of tax-free cash will continue to be able to do so. Where higher lump sums are paid, these will be taxed at the individual’s marginal rate (instead of at 55%) from 6 April 2023.
  • The taxation of other lump sums that are impacted by the lifetime allowance (such as the serious ill-health lump sum and certain death lump sums) will also be changed so that, where they are currently subject to a 55% tax charge over the lifetime allowance, the individual’s marginal rate will apply from 6 April 2023.
  • The annual allowance for tax-efficient pensions savings is set to increase from £40,000 to £60,000 with effect from 6 April 2023. Individuals will continue to be able to carry forward unused annual allowances from the previous three tax years.
  • Where individuals have already flexibly accessed money purchase pensions, and are subject to the money purchase annual allowance, this will increase to £10,000 (from £4,000) from 6 April 2023.
  • There will be changes to the annual allowance taper for the highest earners. The taper will begin to apply at an “adjusted income” of £260,000 from 6 April 2023 (up from £240,000). The minimum tapered annual allowance will increase to £10,000 (from £4,000). This means that individuals with adjusted incomes of £360,000 or more will have the lowest tapered annual allowance of £10,000 (and it is worth noting that “adjusted income” is a broad measure that includes investment and rental income as well as employment income, and also adds in the value of employer-funded pension savings).
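For illustration only (and not as tax advice), the taper arithmetic described above can be sketched in code. The reduction rate of £1 of allowance for every £2 of adjusted income above the threshold is the standard HMRC taper mechanism; it is an assumption here, as the rate is not stated in the text above:

```python
def tapered_annual_allowance(adjusted_income: int) -> int:
    """Illustrative tapered annual allowance from 6 April 2023.

    Assumes the standard taper mechanism: the allowance is reduced by
    £1 for every £2 of adjusted income above the threshold, subject to
    a minimum floor.
    """
    STANDARD_ALLOWANCE = 60_000   # annual allowance from 6 April 2023
    TAPER_THRESHOLD = 260_000     # adjusted income at which the taper begins
    MINIMUM_ALLOWANCE = 10_000    # floor on the tapered allowance

    if adjusted_income <= TAPER_THRESHOLD:
        return STANDARD_ALLOWANCE
    reduction = (adjusted_income - TAPER_THRESHOLD) // 2
    return max(STANDARD_ALLOWANCE - reduction, MINIMUM_ALLOWANCE)
```

On these assumptions, an adjusted income of £360,000 produces a £50,000 reduction, which is why the text notes that incomes of £360,000 or more reach the £10,000 floor.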

These are significant changes for older employees with a good history of pensions savings. We can expect to see a revived interest in workplace pensions savings for employees who were close to or over the current lifetime allowance, and potentially a return to pensions savings for individuals who had stopped pensions contributions in order to retain a protected lifetime allowance under “fixed protection”. Any employees due to retire in the next few weeks with a lifetime allowance charge to pay are likely to want to defer drawing their pension until the new tax year, but it looks like the changes will not assist recent retirees.

There are some nuances that are not answered by the details released so far: for example, the policy paper says that the measure will remove the need for individuals to rely on protections from previous decreases to the lifetime allowance (individual or fixed protections) but it is not clear how this ties in with retaining tax-free lump sum protections.

Checklist for employers:

  • Cash allowance in lieu of pension contributions: What are the conditions of payment? Do the changes to the lifetime or annual allowances affect these?
  • Pension benefits/contributions: Is there a link with the lifetime or annual allowances, and if so, what is the effect of the changes?
  • Matching contributions: Do the terms of the matching contributions remain appropriate with the rise in the annual allowance and removal of the lifetime allowance?
  • Uncapped benefits: Are any benefits uncapped in reliance on there being a lifetime allowance? Does this remain appropriate?


Labour Market

The self-styled ‘back to work’ Budget contained reforms intended to incentivise the seven million economically inactive UK adults (excluding students) to fill the approximately two million job vacancies in the UK jobs market. The measures are outlined in a Labour Market Measures Factsheet.

For working parents, more support with childcare will be made available:

  • 30 hours of free childcare, currently available to parents of three and four year olds, will be extended to all parents of children from nine months old until they start school. This will be rolled out in stages from April 2024 to September 2025.
  • Funding for childcare providers will be increased, including new start-up grants for childminders, and new funding for local authorities to set up wraparound care in schools from 8am to 6pm, from September 2024 to September 2026.
  • Universal Credit for working parents will be paid up-front, rather than in arrears, and the maximum amounts which can be claimed will increase to £951 for one child (up from £646) and £1,630 for two children (up from £1,108).

For the long-term sick and disabled, a new Health and Disability White Paper sets out the reform of disability benefits. The work capability assessment will be abolished, meaning that claimants will now only have to do one health assessment, rather than two, and claimants will be able to try work without fear of losing their financial support. There will also be a new programme of “universal support” to help disabled people find jobs, and new measures to help those with health problems to stay in work (with particular focus on those with mental health and musculoskeletal conditions). Two new consultations have also been promised on how best to increase occupational health across UK employers, covering potential regulatory options and tax incentives.

For those over the age of 50 (who account for around 3.5 million of the economically inactive), the pensions reforms outlined above are the headline measure. There will also be “returnerships”, a new style of apprenticeship which will focus on flexible skills training that takes into account previous experience. The Department for Work and Pensions’ “mid-life MOT” will also be enhanced for those still in work, and a further 8,000 “skills boot camp” places will be allocated in addition to the 56,000 currently on offer.

Although the Chancellor acknowledged the benefits of increased remote working for those with certain health conditions, there was no mention of any tax changes to hybrid working, despite the Office of Tax Simplification’s review of hybrid and distance working from December 2022. There was also no promise of public sector pay rises, with the Chancellor suggesting that lowering inflation would provide a solution to the ongoing pay disputes. Previously announced income tax rates also remain the same, with the additional (45%) rate of tax applying to income in excess of £125,140 with effect from 6 April 2023.

Share Incentives

As previewed in last year’s Autumn Statement, changes will be made to the Company Share Option Plan (CSOP) legislation, under which employees may be granted tax-advantaged “market value” share options that can normally be exercised without giving rise to the usual income tax or National Insurance charge on any gains.

From 6 April 2023, the maximum value of shares (measured at the time the option is granted) over which an employee may be granted a CSOP option will be increased from £30,000 to £60,000. Furthermore, the types of “eligible share” over which CSOP options can be granted will be widened by removing the restrictions that, where a company has more than one class of shares, the shares used in connection with a CSOP must satisfy certain additional criteria. These are welcome changes which will increase the attractiveness of CSOP options and enable a greater proportion of companies to offer CSOP options to their employees.

Amendments will also be made to the legislation governing Enterprise Management Incentive (EMI) option plans, under which qualifying small and medium-sized companies may grant tax-advantaged share options to employees. From 6 April 2023, companies granting EMI options will no longer need to set out in the agreement granting the option the details of any restrictions to which the shares under the option are subject. This is a welcome improvement that will ease the administrative burden on companies and avoid causing issues on corporate transactions where failures to meet this requirement create uncertainty around the tax-advantaged treatment of EMI options for employees. The process of granting EMI options will be further simplified by the removal of the requirement for a company to declare that an employee has signed a working time declaration, along with the penalty for failing to produce a signed working time declaration or to provide a copy to the employee.

Finally, the Government will be launching a consultation on the Share Incentive Plan (SIP) and Save As You Earn (SAYE) legislation that governs the UK’s all-employee tax-advantaged share plans. We intend to discuss with clients in the months to come how the SIP and SAYE legislation could be optimised to maximise employee participation. We anticipate that particular focus will be on:

  • the reduction of the five-year period that shares must be held in a SIP before benefiting from the full exemption from income tax and National Insurance; and
  • in light of the Government having already announced the reduction in the “annual exempt amount” from capital gains tax (CGT) over the next few years, whether shares acquired under a SAYE plan should be “rebased” for CGT purposes in the same way as SIP shares, meaning effectively that they will fall out of the scope of CGT, if they are sold at the time the SAYE options are exercised.

Click Here for the Original Article

Ireland // New regulations on transparent and predictable working conditions

As a result of the EU Directive on transparent and predictable working conditions, new regulations are now in force in Ireland. Some key features of the regulations, that employers should now be complying with, include:

  • Probationary periods: Probationary periods are now limited to a duration of six months unless there are exceptional circumstances and an extension is in the interest of the employee, in which case the duration may be extended to a maximum of 12 months. Probationary periods can also be extended beyond six months if the employee is absent during the probationary period.
  • Outside activities: Employers cannot prohibit employees from working for another employer outside of their working schedule unless there are objective grounds for doing so (e.g. health and safety, or the protection of business confidentiality). Any restrictions on outside activities should be proportionate and set out in writing to the employee.
  • Terms of employment: The regulations expanded the list of written terms that must be given to employees when they start employment. Additional terms that now need to be provided within five days of employment include those relating to hours of work, probationary period and place of work (among others). Additional terms that now need to be provided within one month of employment include those relating to training entitlement and information relating to unpredictable work conditions (if applicable).
  • Mandatory training: The cost of any mandatory training must be covered by the employer. Time spent by an employee on such training should be regarded as working time and carried out within working hours (where possible).
  • Predictability of work: Employees with at least six months’ service and who have completed their probationary period can request more predictable and secure working conditions. Employers must respond in writing within one month.

Click Here for the Original Article

Artificial Intelligence is Increasingly Being Used to Make Workplace Decisions, but Human Intelligence Remains Vital

Companies are increasingly turning to artificial intelligence tools and analytics to reduce cost, enhance efficiency, raise performance, and minimize bias in hiring and other job-related decisions. The results have been promising–but concerns over fairness and objectivity persist.

Large employers are already using some form of artificial intelligence in employment decision-making. A February 2022 survey from the Society for Human Resource Management found that 79% of employers use A.I. and/or automation for recruitment and hiring.

The move by employers to harness A.I. and related data analytics in an effort to reduce unconscious bias in employment decision-making is no surprise. In the past few years, companies have increasingly prioritized diversity, equity, and inclusion initiatives. After the killing of George Floyd and subsequent protests around the country, businesses pledged $200 billion to increase efforts toward racial justice. Surveys show businesses are committed to increasing DEI budgets, staffing, and metrics, and investing more in employee resource and affinity groups. Pay equity audits are on the rise, along with a host of new laws in New York, California, and elsewhere mandating transparency on employee compensation.

A.I. has been proven to be helpful in a variety of areas related to hiring more diversely, including anonymizing resumes and interviewees, performing structured interviews, and using neuroscience games to identify traits, skills, and behaviors. Some companies conduct video interviews of applicants and use A.I. to analyze factors found within them, including facial expressions, eye contact, and word choice. This use of A.I. can help avoid decisions that treat similarly situated applicants and employees differently based on entrenched or unconscious bias, or the whims of individual decision-makers.

Consider a study conducted at Yale which showed that when assessing candidates for police chief, human evaluators justified choosing men without college degrees over women with college degrees because “street smarts” were the most important criteria. However, when the names on the applications were reversed, evaluators chose men with college degrees over women without college degrees, claiming that the degrees were the more important criteria. If the criteria had been set in advance, unconscious biases against women could have been mitigated because evaluators would not have been able to justify their decisions in retrospect. Unlike humans, A.I. tools won’t deviate from pre-selected criteria to rationalize a biased decision.

How does A.I. do it? In many instances, A.I. can reduce humans’ subjective interpretation of data because machine-learning algorithms are trained to consider only variables that improve predictive accuracy, McKinsey found. Algorithms can consider various characteristics on a resume–including a candidate’s name, prior experience, education, and hobbies–and be trained to consider only those characteristics or traits that predict a desired outcome such as whether a candidate will perform well once on the job. The results are impressive. In a forthcoming paper, Bo Cowgill at Columbia Business School will report the results of his study of the performance of a job-screening algorithm in hiring software engineers. He found that a candidate picked by the machine (and not by a human) is 14% more likely to pass an interview and receive a job offer and 18% more likely to accept a job offer when extended.

Algorithms are not only used for reducing bias in hiring. They are also useful in monitoring employee productivity and performance, and to make decisions regarding promotion and salary increases. For example, parcel delivery companies use A.I. to monitor and report on driver safety and productivity by tracking driver movement and when drivers put their trucks in reverse. Other companies may use A.I. to track employee login times and monitor whether employees are paying attention to their computer screens using webcams and eye-tracking software.

A.I. has even been helpful when choosing candidates for corporate boards. A study at the Fisher College of Business that compared the use of machine learning in selecting directors with human-selected boards found that human-chosen directors were more likely to be male, had larger networks, and held many past and current directorships. By contrast, the machine algorithm found that directors who were not friends of management, had smaller networks, and had backgrounds different from management’s were more likely to be effective directors, including by monitoring management more rigorously and offering potentially more useful opinions about policy.

A.I. is not without its flaws. In 2018, Amazon abandoned an A.I. hiring tool when it determined the tool had actually perpetuated bias, largely as a result of the sample hiring and resume data the company provided to the algorithm, which skewed heavily male. Most resumes in the training data belonged to men, reflecting the disproportionate number of men in the tech sector, so naturally, the A.I. tool taught itself that men were preferable candidates. The tool then scored the resumes of people who attended “women’s” colleges or who played on a “women’s” chess team lower. Of course, the problem was not in the A.I. itself, but in the data inputs from the company.

Recognizing the blind spots associated with A.I., some companies have collaborated to develop policies that mitigate its potential discriminatory effects. Data & Trust Alliance is a corporate group that has developed “Algorithmic Bias Safeguards for Workforce” with the goal of detecting, mitigating, and monitoring algorithmic bias in workforce decisions.

Two states–Maryland and Illinois–have enacted statutes regulating the use of A.I. Illinois law requires employers to notify applicants when A.I. will be used and obtain the applicant’s consent. Proposed legislation in a third state, California, takes a page from the European Union’s General Data Protection Regulation (GDPR) by imposing liability on the vendors of A.I. tools.

Federal policymakers and regulators also have an important role to play in ensuring that A.I. is used in the service of advancing an equitable playing field in hiring and retention of qualified workers. Strong metrics and oversight will be needed to check even the smartest algorithms.

Historically, all technologies go through an adaptive phase where we get to know them, recognize their utility, and create methods to guard against their unintended, yet inevitable, deleterious effects. In the end, it is unlikely that there is going to be a one-size-fits-all approach to using A.I. effectively and responsibly. We will learn as we go, turning over many human tasks to machines even as we call upon our humanity to monitor them. Without question, our employment decisions will benefit from the right mix of A.I. with human intelligence.

Click Here for the Original Article

Children’s Privacy: Trends in Europe, the United States and Canada

As children increasingly participate online and at earlier ages, lawmakers around the world are passing legislation to bolster children’s privacy to varying degrees. This article contains a comparative analysis of some of these laws, highlighting trends in children’s privacy, which includes:

  • Children’s personal information is increasingly seen as particularly sensitive and deserving of heightened protections;
  • Lawmakers are concerned with ensuring organizations obtain valid consent for the collection, use and disclosure of children’s personal information;
  • Organizations collecting, using or disclosing children’s personal information should expect to face heightened reporting and administrative requirements (now and in the future);
  • Organizations must ensure automatically high default privacy settings for children; and
  • Violation of children’s privacy is likely to attract more regulator attention and significant penalties.


Europe

The European Union’s overarching General Data Protection Regulation (the “GDPR”) recognizes that children’s personal data should be afforded special protections because they may be less aware of the risks and consequences of data sharing. The GDPR particularly focuses on the consent required from young data subjects for the processing of their personal data. Under the GDPR, the default age of consent is 16 years, but individual member states may lower it to a minimum of 13 years old, a liberty that certain countries have taken. Article 8 states that a child’s consent is only valid if the holder of parental responsibility also gives consent, with Article 8(2) requiring reasonable effort on the part of the service provider to verify that the parent has given consent.

Regulators under the GDPR have shown that children’s privacy is taken seriously and that they will not shy away from issuing significant fines. The second largest-ever fine under the GDPR was issued in September 2022 for a violation of children’s privacy.

In addition to the GDPR, the Digital Services Act (the “DSA”) has been in force since November 2022. The DSA applies to “digital services”, meaning that it can be applied to a broad range of online services, from simple websites to internet infrastructure services and online platforms. The DSA bans platforms from delivering targeted advertisements to recipients when the platform is aware with reasonable certainty that the recipient of the service is a child. However, this rule is subject to the principle of data minimization – it should not incentivize providers of online platforms to collect the age of the recipient of the service prior to their use.

In the UK, the Age Appropriate Design Code (the “Code”) applies to a wide range of online services such as apps, games, connected toys and devices, and news services. Products and services within the scope of the Code must consider the privacy and protection of children, by design and default. If there is a conflict between the interests of the service and the child, the child’s best interest must be paramount.

United States

Children’s privacy in the US is governed by the Children’s Online Privacy Protection Act (“COPPA”). Enacted in 1998, it applies to websites and online services directed at children under 13 years of age. Factors that are considered when determining if a website or service is directed to children include its “visual content”, “presence of child celebrities”, and “music or other audio content”. Businesses that fall within the scope of COPPA are required to provide a clear and comprehensive privacy policy, direct notice of information practices to parents before collection of children’s data, and to ensure that the parent’s “verifiable” consent has been obtained. Parents have ongoing rights to review personal information collected about their child, revoke consent, and delete the child’s data. Businesses within COPPA’s regulatory ambit are not only responsible for their own compliance but are responsible for their vendors’ compliance as well.

Similarly to their European counterparts, regulators in the United States have also demonstrated a willingness to issue significant fines for violations of children’s privacy rules. In 2019, the Federal Trade Commission and the New York Attorney General obtained a $170 million civil penalty for violations of COPPA. Specifically, it was alleged that the online service illegally collected personal information from children without their parents’ consent.

At the state level, lawmakers in California have passed into law the California Age-Appropriate Design Code Act (the “CAADCA”), taking effect July 1, 2024. The CAADCA applies to for-profit businesses that “provide an online service, product, or feature likely to be accessed by children” who are under the age of 18. The CAADCA creates several regulatory responsibilities for businesses that fall within its ambit. It introduces periodic reporting requirements and measures that companies must apply to enhance children’s privacy. Such measures include automatically configuring a high setting of privacy for children, clearly providing a privacy policy, and notifications if the child’s activity is being monitored. In addition to the added responsibility for covered businesses, the CAADCA restricts actions that can be taken, including using children’s personal information in a way that is detrimental to the child’s health and well-being, and retaining more information than necessary.

In 2023, Utah became the first state to enact laws limiting how children can use social media. While these two bills, collectively known as the Social Media Regulation Act (SMRA), are not directly aimed at protecting privacy, they do have important privacy implications. The SMRA requires social media companies to obtain parental consent for any user under the age of 18. Enhanced privacy protections must then be put in place, for example, restricting the collection and sharing of personal information. However, the child’s privacy is also stripped away, as social media companies must provide parents with access to the content and interactions of their child’s account. Infringements of the SMRA can result in injunctions and civil penalties against social media companies. In addition, the legislation authorizes individuals to sue social media companies for damages in the event that harm has been caused by SMRA violations.

The SMRA takes effect on March 1, 2024. Going forward, other states such as Arkansas, Texas, Ohio, Louisiana and New Jersey are also looking to pass legislation targeting social media companies, with similarly significant privacy implications for children.


Canada

Unlike its American and European counterparts, Canada does not currently have laws in force that are expressly dedicated to children’s privacy. While the Personal Information Protection and Electronic Documents Act (“PIPEDA”) does not differentiate between adults and youth, the Office of the Privacy Commissioner of Canada (the “OPC”) has consistently viewed personal information relating to youth and children as particularly sensitive, meaning it must be handled accordingly. The OPC has also taken the position that, in all but exceptional cases, parental consent must be obtained for the collection, use and disclosure of the personal information of children under the age of 13. In addition, Canada has signed and ratified the UN Convention on the Rights of the Child, which protects children’s right to privacy.

There is, however, proposed legislation aimed at bolstering children’s privacy: Bill C-27’s Consumer Privacy Protection Act (the “Bill”). The Bill introduces new protections for children by requiring a higher standard of diligence and protection in respect of the collection and processing of their personal information. A child’s personal information would be the only prescribed category of “sensitive” information, meaning that it would always attract heightened protections, positive obligations for deletion, and a (likely) requirement for express consent for its collection, use or disclosure.

While no fines for violation of children’s privacy have been issued under PIPEDA, if the Bill is enacted, it will give the OPC the power to issue fines in this regard. There is an expectation that the OPC will be particularly interested in fines where children’s personal information is at issue given that it is the only prescribed category of sensitive information.

Click Here for the Original Article


Let's start a conversation

    Nicolas Dufour - EVP and General Counsel, Corporate Secretary

    Nicolas Dufour serves as EVP, General Counsel, corporate secretary, privacy officer, and a member of the executive management team for ClearStar. He is proficient in the FCRA, GLBA, Privacy Shield, and GDPR compliance, as well as other data privacy regimes and publicly traded companies' governance. He is responsible for managing all legal functions to support the evolving needs of a fast-paced and rapidly changing industry. His position includes providing legal guidance and legal management best practices and operating standards related to the background screening industry, federal, state, and local laws and regulations, legal strategic matters, product development, and managing outside counsel. He represents the company in a broad range of corporate and commercial matters, including commercial transactions, M&A, licensing, regulatory compliance, litigation management, and corporate and board governance. He researches and evaluates all aspects of legal risks associated with growth into different markets. He assists the management team in setting goals and objectives in the development, implementation, and marketing of new products and services. He advises and supports management, the Board of Directors, and operating personnel on corporate governance, company policies, and regulatory compliance.

    At ClearStar, we are committed to your success. An important part of your employment screening program involves compliance with various laws and regulations, which is why we are providing information regarding screening requirements in certain countries, regions, etc. While we are happy to provide you with this information, it is your responsibility to comply with applicable laws and to understand how such information pertains to your employment screening program. The foregoing information is not offered as legal advice but is instead offered for informational purposes. ClearStar is not a law firm and does not offer legal advice, and this communication does not form an attorney-client relationship. The foregoing information is therefore not intended as a substitute for the legal advice of a lawyer knowledgeable of the user’s individual circumstances. ClearStar makes no assurances regarding the accuracy, completeness, or utility of the information contained in this publication. Legislative, regulatory, and case law developments regularly impact this area, which is evolving rapidly. ClearStar expressly disclaims any warranties or responsibility for damages associated with or arising out of the information provided herein.

