AI Regulation & Legislation

A detailed guide to understanding why AI regulation and legislation should be a core priority in your contact center’s operations, so you can ensure compliance, protect customer data, and create a secure, efficient environment.

AI Regulation & Legislation:

What Contact Centers Need to Know

Contact centers have always been at the forefront of technological advancements.

They were among the first to develop interactive voice response (IVR) systems in the 1980s, answering basic questions for callers and routing customers to the right agent or department. Although such systems feel rudimentary now, they were game-changers then.

As the internet became a dominant channel for customer service in the late 1990s and early 2000s, contact centers were early adopters of the first chatbots. Built on layers of rules and if/then statements, those bots often made for robotic, stilted customer interactions, but they were clear harbingers of advancements to come.

In 2022, OpenAI introduced ChatGPT and showed the world what large language models and generative AI could do. Once again, contact centers are center stage, showing their ingenuity by identifying ways to harness AI for the benefit of their agents and customers.

As technology evolves and AI usage becomes widespread, the ethical and legal questions surrounding its use are multiplying.

As AI use grows, so do related ethical and legal concerns

Legislation and regulation regarding how AI is trained and deployed in the world have accelerated in the last several years.

Numerous legal cases are popping up, many of which specifically involve AI usage in customer service. We’ll dive into details below, but here’s a snapshot:

  • California, Colorado, and Utah all passed laws in recent years requiring various levels of company disclosures when customers are interacting with AI.
  • California has seen two notable cases challenging how AI tools use recorded customer calls without those customers’ consent.
  • Pennsylvania saw a similar case challenging the collection of online customer activity.
And the legal and legislative concerns don’t stop at the state level:
  • President Biden’s administration has signaled that it’s turning its attention to solving problems surrounding customer service chatbots.
  • A Canadian tribunal ordered an airline to make a passenger whole after its chatbot made up an answer about airfare.
  • The EU has passed major legislation this year directly regulating AI, which will undoubtedly have international reach.

Forward thinkers in CX must continue to think ahead

Staying abreast of these developments is crucial for businesses.

The legal cases and pending or passed legislation have serious implications for contact center operations, whether those operations remain solely in the United States or extend abroad.

We’ll discuss notable legislation and legal cases in each region, with an eye toward helping your contact center implement AI conscientiously so that you can empower your agents to do their best work and provide your customers with outstanding customer service.

All while keeping your business safe and compliant.

Current AI Legislation Contact Centers Should Monitor

California’s Bolstering Online Transparency Act (In Effect)

California is notable for prioritizing consumer rights, particularly when it comes to transparency and privacy in online business transactions.

The state has already passed two major laws, the California Online Privacy Protection Act and the California Consumer Privacy Act, protecting consumer privacy and mandating disclosures about how consumer data is processed, stored, and shared online.

So it’s no surprise that the state would also be concerned with California consumers’ right to know when they are talking to a chatbot and when they are talking to a real person.


The Bolstering Online Transparency Act of 2019 (or the Bot Act, for short) defines a bot as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.”


The law applies to bots that interact with California residents on large online platforms, which the act defines as public-facing websites, social networks, or mobile and desktop apps with 10 million or more unique monthly U.S. visitors.


To be clear: this isn’t just for companies based in California. It applies to any company whose bots interact with California residents.


The Bot Act requires bots to clearly and proactively disclose that they’re bots when they are being used to “incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” 


The act doesn’t specify a penalty or the right to bring private lawsuits for violations, nor could we find any examples of enforcement as of publishing. It’s likely the law would be enforced under existing California false advertising or fraud laws, with penalties such as fines or jail time.

The Colorado Artificial Intelligence Act (Passed)

In May 2024, Colorado passed the Colorado Artificial Intelligence Act (CO AI Act), which experts say is the most comprehensive law regulating AI systems in the United States.

The act has already been passed, but it gives companies until February 1, 2026, to make any changes needed to comply. While the main focus of the act is regulating high-risk AI systems, it also includes disclosure requirements similar to those in California’s Bot Act.

The CO AI Act requires “deployers,” which it defines as anyone doing business in Colorado, to inform consumers when they are interacting with an AI system (whether high- or low-risk) unless “it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.”


There is no right to private litigation under the act, and the Colorado Attorney General has exclusive authority to make rules and enforce violations. Violations are considered deceptive trade practices and could carry penalties of up to $20,000 for each violation.


As with California, companies subject to the CO AI Act will also likely be subject to the Colorado Privacy Act, which protects the personal data of Colorado consumers, requires companies to disclose how they handle that data, and gives consumers control over their data.

Utah’s Artificial Intelligence Policy Act (In Effect)

Utah’s Artificial Intelligence Policy Act (AI Policy Act) was passed in March 2024 and takes a two-tiered approach to requiring disclosures of the use of AI systems.

Companies in professions that are licensed and regulated by the Utah Department of Commerce, such as accounting firms and healthcare companies, are required to proactively inform consumers when they’re interacting with an AI system. Not only must the disclosure come at the beginning of the conversation, but it must also be “prominent,” although the act doesn’t define what makes a disclosure “prominent.”

Companies in other professions (those regulated by the Utah Division of Consumer Protection) must disclose if the consumer asks whether they’re interacting with a bot or a human. In these cases, the disclosure must be made “clearly and conspicuously,” which is also not defined in the act.


Interestingly, the AI Policy Act also “expressly prohibits attempting to avoid consumer protection liability by blaming generative AI itself as an intervening factor. Thus, the UAIPA says that ‘[i]t is not a defense’ to assert that generative AI ‘made the violative statement; undertook the violative act; or was used in furtherance of the violation.’”


In other words, businesses can’t blame a chatbot if the bot hallucinates or gives a wrong answer and that answer results in a consumer protection violation. The business is responsible, not the bot. 


As with the other laws regulating AI transparency, there is no right to private litigation under Utah’s AI Policy Act. Each deceptive act or practice is considered a separate violation, and both the Utah Division of Consumer Protection (UDCP) and the Utah Attorney General (UAG) can enforce the act and impose fines. The UDCP may impose a fine of up to $2,500 per violation, while the UAG may also impose a fine of $5,000 per violation in addition to recovering any money that was gained during the course of a violation.

The “Time is Money” Initiative (Proposed)

Unlike the legislation above, the “Time is Money” initiative is an effort proposed by the Biden Administration in August 2024 to reform common business practices that tie consumers up in dark patterns that waste their time and cost them money.

The bulk of the initiative aims to improve consumer experiences around things like canceling memberships and subscriptions, airline cancellations and refunds, health insurance claims, and customer service “doom loops.”

However, the Administration has stated its intent to “crack down on ineffective and time-wasting chatbots used by banks and other financial institutions in lieu of customer service,” and with struggles around AI in customer service reaching the courts (and making the news) so frequently this year, it’s likely we’ll see more from the Administration on this front.

The Artificial Intelligence Act of the European Union (In Effect)

The Artificial Intelligence Act of the European Union (EU AI Act) came into effect in August 2024, covering all 27 EU member states, but it also applies to companies outside the EU if their AI systems are used by someone in the EU.

The act is considered “the world’s first comprehensive regulatory framework for AI,” governing how AI can be used, providing a framework for regulating “high-risk” AI systems, and introducing transparency requirements for other kinds of AI.

The EU AI Act prohibits certain uses of AI, including using AI to manipulate individuals or materially distort their behavior, socially score individuals, exploit vulnerabilities related to a person’s age, disability, or social or economic situation, and identify people in publicly accessible spaces in real time based on their biometric data for law enforcement purposes (with narrow exceptions), among other prohibited practices.


The act also identifies some AI systems as high-risk because they are products, or are used in products, regulated by certain EU laws, and others as high-risk because of the context in which they’re used. For instance, AI systems used in health and public safety, in democratic and judicial processes, and in employment or hiring processes are all considered high-risk (among several others).


For these high-risk AI systems to be compliant with the EU AI Act, their providers must implement and maintain processes and documentation covering risk assessment and mitigation, training data and data governance, testing and validation, cybersecurity, and more.


Most relevant for contact centers, however, are the EU AI Act’s AI transparency requirements for those who deploy an AI system. 


The act says that “[c]ertain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not.”


In these cases, the AI system (such as a chatbot or AI phone agent) should be able to inform a customer that they’re interacting with AI, unless “this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use.”

Litigation

As busy as it’s been for AI legislation, it’s also been an eventful time for litigation involving AI and customer service. Two notable cases were filed this year in California and are ongoing, and one was decided in Canada.

In February 2024, the Civil Resolution Tribunal of British Columbia (the Canadian equivalent of small claims court) ruled in favor of Jake Moffatt, who had brought a suit against Air Canada over the company’s refusal to issue a partial refund for tickets they had purchased under what they had been told was the airline’s bereavement fare policy.


Moffatt had purchased the tickets to attend the funeral of their grandmother, but not before researching Air Canada’s bereavement fare policy by using a chatbot on the airline’s website. The chatbot wrongly advised Moffatt that they could apply for a partial refund within 90 days of when their ticket was issued, information Moffatt relied upon when deciding to purchase the airfare.


When Moffatt later applied for the partial refund (within the stated 90 days), Air Canada repeatedly denied their claims, even while admitting that the chatbot had provided “misleading words.”


In the order detailing the decision and the damages to be paid to Moffatt, Tribunal Member Christopher C. Rivers wrote that the relationship between Moffatt and Air Canada was one of service provider and consumer, and thus Air Canada owed Moffatt a duty of care. Because Air Canada failed to exercise that duty of care, Rivers found, the airline made a negligent misrepresentation through its chatbot.


Air Canada tried to argue that it couldn’t be “held liable for information provided by one of its agents, servants, or representatives – including a chatbot” because “the chatbot is a separate legal entity that is responsible for its own actions.”


Rivers called this suggestion “remarkable,” noting that even if the chatbot was interactive, it was still only an element of the company’s website. 


Additionally, although a screenshot Moffatt took of the conversation with the chatbot showed a link to a separate Air Canada webpage titled “Bereavement travel,” which Air Canada said showed the correct policy, Rivers didn’t find this argument compelling either.


Rivers wrote that Air Canada:

  • Didn’t take reasonable care to ensure its chatbot was providing customers with accurate information
  • Nor did the company explain “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot”
  • Nor “why customers should have to double-check information found in one part of its website on another part of its website.”

Ultimately, Rivers found that it was reasonable under the circumstances for Moffatt to have believed that the information the chatbot provided was accurate and to have acted on that information, awarding Moffatt $812.02 CAD in damages, fees, and interest.

In July of this year, Michelle Gills, a customer of Patagonia, filed a class-action lawsuit against the company alleging that Patagonia knowingly allowed a third party to “tap, intercept, receive, listen to, record, and use” the content of her conversations with Patagonia for its own purposes without her knowledge or consent. 


The lawsuit claims four causes of action: two violations of the California Invasion of Privacy Act, an invasion of privacy under California’s Constitution, and an intrusion upon seclusion.


Patagonia uses Talkdesk, a contact center as a service (CCaaS) company and the third party named in the lawsuit, to communicate with customers, as well as for reporting, analytics, and quality management.


In the 18-page complaint, Gills cites a customer story on Talkdesk’s website featuring Patagonia and references the company’s Senior Manager of Customer Experience Operations by name as proof of Patagonia’s relationship with Talkdesk.


The complaint further alleges that:

  • Talkdesk uses some portion of intercepted communications to train its AI models.
  • Talkdesk “uses at least a subset of customer data for Talkdesk’s internal business purposes, including the improvement or enhancement of (or new offerings related to) Talkdesk’s services.”
  • Patagonia intentionally used and installed Talkdesk’s products knowing it would intercept customers’ communications with the company.
  • “Patagonia knows that Talkdesk uses communications it collects via its products to advance its own business interests, because Talkdesk’s contract says that it can do so.”
  • Neither Patagonia nor Talkdesk obtains consent to intercept or record customer communications, because Patagonia’s Privacy Notice is a non-binding browsewrap agreement that is never presented to customers to obtain their consent.
  • Patagonia’s Privacy Notice fails to disclose that Talkdesk is intercepting or recording customer conversations and is worded in a way that a reasonable person would interpret to mean that no third parties are collecting customer communications.
  • When callers call one of Patagonia’s support lines, they are told that the call “may be recorded for quality training purposes,” which a reasonable person would interpret to mean that the company is recording the call for internal purposes only.
  • Neither Patagonia nor Talkdesk informs customers that anyone else may be listening to or recording the call, and neither obtains consent from the customer for a third party to listen in or record the call.

As of late July 2024, the lawsuit had been assigned a judge and deemed a complex case, with initial proceedings to begin in November 2024.

A second California case shares many similarities with the Patagonia case, including the plaintiffs’ legal representation, Stephen Andrews and Christin Cho of Dovel & Luner, LLP.


Credit union customer Avner Paulino filed a class-action lawsuit against Navy Federal Credit Union (NFCU) and CX automation platform Verint Systems Inc.


Paulino alleges that NFCU knowingly allowed Verint, which NFCU contracted with for AI agent assistance and sentiment analysis, to tap, intercept, receive, record, and use the content of his communications with the credit union for its own purposes without his knowledge or consent.


The suit claims causes of action similar to those in the Patagonia case, with some additions: three violations of the California Invasion of Privacy Act, an invasion of privacy under California’s Constitution, an intrusion upon seclusion, and a claim of quasi-contract.


In early August 2024, NFCU and Verint filed a joint motion to dismiss the case. Paulino has until late September to file an opposition, with the hearing for the motion to dismiss scheduled for December 2024.

Recommended Actions for Contact Centers

Before we dig into the implications for contact centers, a quick note: it’s important to protect your company and comply with any relevant legal requirements, so you should consult legal counsel about any questions you have regarding compliance with AI and privacy laws in the jurisdictions where your company operates.

Any guidance here is meant to help you create the best possible experience for your customers while allowing you to get the most out of your AI tools, but don’t mistake the discussions here for legal advice. We’re not lawyers, just CX professionals who care about your business and your customers.

Recent litigation illustrates the rising awareness customers have about what AI is and how it works, as well as their desire to have a say in when and how they interact with it.

Furthermore, it demonstrates that businesses do — as Tribunal Member Rivers put it — have a duty of care to consumers to be transparent about how they use AI, to ensure their AI systems are working properly, and to make customers whole when they aren’t.

Here are four actions we recommend contact centers take to make sure their AI and their customers are successful.

When you’re considering new vendors, especially those offering AI-powered tools, make sure you’ve defined your must-haves and must-not-haves for those AI tools and that your final vendor choices meet those standards.


At a minimum, you should know:


  • How the AI tools work and how the vendor ensures their quality.
  • How the vendor will be storing and securing your company and user data, and how they detect, mitigate, combat, and communicate security and data breaches.
  • Whether the vendor will be using your company and user data to train their AI models and if (and how) you can opt out from this.
  • Whether (and how) the AI model can unlearn your company and user data if you need to revoke consent in the future.


You should also have a primary contact for each vendor that you can reach within a reasonable amount of time and who can answer technical questions about AI and privacy practices within their company. You will likely need to work closely with them to compose proper AI and privacy disclosures, and the last thing you want is for them to disappear once the contract is signed.

As you’ve seen with our review of AI legislation, not all laws and jurisdictions require the same level of disclosure. And as you’ve seen with our review of recent litigation, consumers get very upset when they learn a third party has seen and used their data without their knowledge and consent.


So rather than trying to strictly follow what’s legally required in a given jurisdiction, instead consider that the best customer experience is almost always an empowering and informative one. 


It’s thoughtful, and safest, to ensure customers understand as early as possible in the conversation when they’re interacting with AI, how that AI is using their data, and how they can opt out of any of those uses.


This approach can work for your customers and your brand. Partner with your marketing team (and, of course, your legal team) to make these disclosures sound like your company wherever possible. There’s no reason they have to be boring, but they should prioritize clarity and transparency.
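
If your chat platform lets you customize how a session opens, one lightweight way to put this into practice is to deliver the disclosure before the AI says anything else and to log that it was shown. Below is a minimal, hypothetical Python sketch; the ChatSession class, the disclosure wording, and the logging helper are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical sketch only: surface an AI disclosure at the start of a chat
# session, before any AI-generated replies, and log that it was shown.
# ChatSession, send(), and log_disclosure() are illustrative stand-ins,
# not a real vendor API.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ChatSession:
    customer_id: str
    transcript: list = field(default_factory=list)

    def send(self, message: str) -> None:
        # In a real deployment this would call your chat platform's API.
        self.transcript.append(message)


DISCLOSURE = (
    "Hi! You're chatting with our virtual assistant, which uses AI. "
    "Conversations may be recorded and shared with our technology partners "
    "to improve service. Reply OPT OUT to limit that use, or ask for a "
    "human agent at any time."
)


def log_disclosure(customer_id: str, shown_at: datetime) -> None:
    # Persist when (and which version of) the disclosure was shown, so you
    # can demonstrate compliance later; here we simply print it.
    print(f"Disclosure shown to {customer_id} at {shown_at.isoformat()}")


def start_session(session: ChatSession) -> None:
    """Send the AI disclosure before the bot says anything else."""
    session.send(DISCLOSURE)
    log_disclosure(session.customer_id, shown_at=datetime.now(timezone.utc))


if __name__ == "__main__":
    start_session(ChatSession(customer_id="customer-123"))
```

The design point is simply that the disclosure is sent and recorded before the first AI-generated reply, which maps cleanly onto the “proactive and prominent” standard in the stricter statutes discussed above.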

Review your terms of service/conditions and privacy policies regularly and ensure they reflect all your uses of AI systems.


This is an area where engaging a legal professional who specializes in these kinds of policies can be very helpful and save you a lot of time, money, and heartache later. 


If you’re using a chatbot or AI phone agent, you may also want to consider adding a disclaimer to your terms of service/conditions noting that the chatbot can get things wrong and explaining what the customer should do when that happens. Your legal counsel can advise you on whether this is the right move for your business.

It’s safest just to assume that, like humans, your AI systems are going to get things wrong sometimes. Have a plan for when that happens, and make sure your agents know the plan and are empowered to take action.
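
What that plan looks like will depend on your tools, but a common pattern is to run a confidence or topic check on each AI-drafted reply and hand the conversation to a human agent when the check fails. Here’s a rough, hypothetical Python sketch; the confidence score, topic tags, threshold value, and escalate_to_agent helper are assumptions for illustration, not features of any specific platform.

```python
# Hypothetical sketch only: route low-confidence or sensitive AI-drafted
# replies to a human agent instead of sending them to the customer.

CONFIDENCE_THRESHOLD = 0.75  # tune against your own quality data
SENSITIVE_TOPICS = {"refunds", "bereavement", "legal", "billing_dispute"}


def escalate_to_agent(draft_reply: str, topics: set) -> str:
    # In a real system this would transfer the chat or open a ticket,
    # attaching the AI's draft so the agent has context.
    print(f"Escalating to a human agent (topics: {sorted(topics)})")
    return (
        "I want to make sure you get accurate information, so I'm "
        "connecting you with one of our agents."
    )


def handle_ai_reply(draft_reply: str, confidence: float, topics: set) -> str:
    """Send the AI's draft only if it clears the confidence and topic checks."""
    if confidence < CONFIDENCE_THRESHOLD or topics & SENSITIVE_TOPICS:
        return escalate_to_agent(draft_reply, topics)
    return draft_reply


if __name__ == "__main__":
    # A low-confidence answer about a refund policy gets escalated.
    print(handle_ai_reply("Our refund window is 90 days.", 0.62, {"refunds"}))
```

Whatever the mechanics, the goal is the same lesson the Air Canada decision teaches: catch the wrong answer before the customer relies on it, or make it easy for an agent to put things right quickly when they do.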


Remember, maintaining a good relationship with your customers is more important than defending your AI, not to mention cheaper than defending yourself in a lawsuit.

Make partners out of AI and your customers

AI technology is rapidly developing, and regulation is only now starting to ramp up. That means that we’re likely to see legislation accelerate to keep pace; we’ve already seen it in the laws passed and proposed just this year.

Likewise, as customers become more savvy about AI and how it is being used by the businesses they frequent, their expectations for how transparent those businesses should be about their use of AI will also become more sophisticated.

Unless businesses evolve to meet those expectations, we’re likely to see more litigation to resolve conflicts over privacy and AI practices between consumers and companies.

Contact centers that want to survive and thrive know to look ahead, not only embracing new technologies but also preparing for challenges before they arrive. We hope this ebook gives you the foreknowledge you need to build better AI and privacy processes for your company and to communicate more clearly and effectively with your customers.

In the grand tradition of CX forward thinkers, use what you’ve learned here to make partners out of AI and your customers – your business will be better for it.


Ready to chat with us?

We’ve helped dozens of innovative companies launch and scale their customer service teams. Whatever you need to grow your business, our flexible offerings can fit. Let’s chat about how outsourcing can unlock new levels of growth for your business.