A Practical Approach to AI Transparency

AI is all around us now — or at least it feels that way!

But it feels that way for good reason. There’s a lot of hype and a lot of substance behind the technology powering generative AI like ChatGPT and Gemini. Even though much about AI is still evolving, it’s already an incredibly useful tool that can save us and our customers time, money, and energy.

Wading through that hype is the hard part, however. 

There’s so much enthusiasm for the technology that it can be difficult to slow down and consider all of the implications that come with using AI in the real world.

Whenever I start consulting for a new company, invariably one of their first questions is, “How can we implement AI to benefit our team and our customers?”

My answer to that question is always (with much understanding and humor), “Very carefully.”

I say that not just because I’m a customer support professional and I always want to make sure that an AI tool makes sense in the context of a particular company’s customers and needs, but also because there are practical considerations involved with using AI that go beyond implementation.

Implementation is just the tip of the AI iceberg, and you can’t focus on implementation until you’ve taken the steps to understand the AI systems you’re using — how they work, how they’re trained, and why they come to certain conclusions.

After implementation comes presentation — communicating all of that same information (and then some) to your customers so that the AI, in whatever form it’s taking, can actually be useful.

What I’ve just described is AI transparency, and it’s no exaggeration to say it’s the most important concept you’ll encounter when evaluating and using AI in your business.

Why AI transparency is important

We’ll get into the specifics of what it means to be transparent about AI in a moment, but first, I think we have to put AI transparency in context. 

A lot of work has to go into getting AI transparency right, and unless the argument for it is crystal clear, it can be easy to justify skipping the work altogether.

There are a number of factors at play influencing the need for AI transparency:

  • Legislation and regulation (at the state, national, and international levels).

  • Litigation (some ongoing).

  • Ethical considerations (it’s just the right thing to do).

Legislation and regulation

As someone who cares deeply about customers, I never want to do something just because it’s legally required, but the simple fact is that complying with the law will (and should) always be a business’s number one priority. Thankfully, the law and customer interest frequently align, so this is rarely a conflict.

We should be transparent with our customers regarding our use of AI because there’s a very good chance that our business operates or interacts with customers in a jurisdiction that requires it. 

California, Utah, and Colorado have all passed legislation requiring some level of disclosure around the use of AI and/or how it processes data, and the Biden administration recently announced its “Time is Money” initiative, signaling an intent to broadly reform customer service practices, including some involving AI chatbots.

In the EU, the Artificial Intelligence Act was approved this year; among many other requirements, its provisions impose AI transparency obligations with a territorial scope similar to that of the GDPR. More AI regulation from the EU is expected to follow.

There are also many existing privacy laws at the state, federal, and international levels that regulate how companies and AI systems can use consumer data and what they must disclose about how they use that data.

Litigation

Of course, litigation can also have a great effect on both law and business behavior, and we’ve seen a few notable cases recently regarding the use of AI in customer service contexts.

In February 2024, Air Canada was ordered by the Civil Resolution Tribunal of British Columbia to refund a consumer after its chatbot made up an answer about bereavement fares that the consumer relied upon when booking a flight. The consumer brought the case to the tribunal after Air Canada refused to honor the chatbot’s incorrect answer and issue the refund.

Two recent cases in California underscore the dangers of allowing AI vendors to record customer data or use customer data to train their AI systems without customer consent:

  • In a class action lawsuit against Navy Federal Credit Union, customers are suing the credit union for allegedly allowing Verint, a company that makes software for contact centers, to “intercept, analyze, and record all customer calls without proper disclosure to or consent from the customers.”

  • In a similar class action lawsuit, this time against Patagonia, a customer is alleging that “neither Talkdesk [software used by Patagonia] nor Patagonia disclose to individuals that their conversations are being intercepted, listened to, recorded and used by Talkdesk.”

It’s clear from these cases (and from emerging legislation responding to consumer concerns) that many customers are deeply troubled by the idea that unknown parties are listening in on their conversations without their knowledge or consent, then using what they hear for purposes that haven’t been made clear.

The mistake I frequently see company leadership make is failing to understand AI in exactly this way: it is essentially a stranger reading — and, in some circumstances, recording — conversations with customers.

They get so caught up in the excitement of what the technology can do that they fail to stop and consider the ethical implications of it all — and what that means regarding responsibility to their customers.

Ethical considerations

This brings me to the final factor influencing AI transparency: We should be transparent about our use of AI simply because it’s the right thing to do. 

From an ethical standpoint, customers have a right to know who’s involved and to have a say in what happens to the information they share in their interactions with companies.

I could cite statistics here about how building trust and rapport with customers is good for business, but I don’t think I have to. We’re all professionals here, and moreover, we’re humans; we know businesses thrive through relationships, we know relationships are built on trust, and we know trust is built on honesty.

A practical guide to navigating AI transparency

Thankfully, as I mentioned before, our legal and ethical obligations are aligned when it comes to AI transparency. 

But knowing our responsibilities and executing them are often two very different things, especially when the AI landscape is changing so rapidly and few of us are experts.

We also have to acknowledge that unless you’re an AI company yourself, you’re not going to be building the AI systems you’re using in your business, which means that your control over how those systems work is limited.

Knowing this, the rest of this guide will focus on offering practical advice on the aspects you can control: choosing the right AI system for your business and your customers, gathering key information, ensuring safeguards are in place, and communicating all of this to your customers.

What AI transparency means when evaluating AI tools

In order to prioritize AI transparency for your customers later, you’ll have to prioritize AI transparency at the very beginning. 

Alongside evaluating AI tools for key features, scalability, and pricing, here are five factors to consider as you’re comparing AI tools:

  1. How the AI system operates and comes to conclusions: The AI vendor should be able to clearly explain to you the internal processes, datasets, algorithms, structures, etc., that make the AI system function. They should also be able to articulate to you how the AI system makes decisions or presents results and how they verify the veracity of both.

  2. How your company’s (and by extension, your customers’) data is being used: The AI vendor should be able to explain how your company’s data is handled and whether it is kept separately from or pooled with other clients’ data. If the latter, they should explain how it is anonymized and whether that data is used for training the AI system.

  3. What control your company and your customers have over how data is used: The AI vendor should be able to explain what mechanisms they have in place to keep your company’s data isolated from other clients’ and to opt out of the AI system using company or customer data for training. They should also be able to explain whether the AI system is capable of unlearning if your company or customers revoke consent for data collection in the future.

  4. How your (and, by extension, your customers’) data is secured and protected: The AI vendor should be able to explain what security measures they have in place when storing your data as well as what monitoring and alerting systems they have in place to detect, combat, and communicate breaches.

  5. What technical support they provide regarding regulatory compliance: The AI vendor should be able to explain what support, if any, they provide regarding compliance with ongoing privacy, security, and data processing disclosures as the regulatory landscape evolves.

Before you commit to an AI-powered tool, be sure you know what your requirements and deal-breakers are for each of these factors, and screen AI vendors accordingly (I’ve sketched out what that screening might look like below). Remember, you’re ultimately responsible for any AI tool you use.

To quote coverage of the Patagonia lawsuit: “Indeed, these [Contact Center as a Service] providers must now consider: how many of our customers are going to get sued? Because Talkdesk didn’t get sued, its customer did.”
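To make that screening concrete, here’s a minimal sketch in Python of how you might track your deal-breakers across the five factors. The requirement names and the screen_vendor helper are hypothetical placeholders, not any vendor’s actual API; substitute your own criteria and answers from your vendor conversations.

```python
# A sketch of a vendor screening checklist based on the five factors above.
# The requirement names and answers are illustrative placeholders, not any
# vendor's real API; adapt them to your own deal-breakers.

DEAL_BREAKERS = {
    "explains_how_system_reaches_conclusions": True,  # factor 1
    "isolates_or_anonymizes_customer_data": True,     # factor 2
    "offers_training_opt_out": True,                  # factor 3
    "documents_breach_detection_and_alerting": True,  # factor 4
    "supports_regulatory_compliance": True,           # factor 5
}

def screen_vendor(answers: dict) -> list:
    """Return the deal-breakers a vendor fails to satisfy."""
    return [
        requirement
        for requirement, required in DEAL_BREAKERS.items()
        if required and not answers.get(requirement, False)
    ]

# Example: a hypothetical vendor that can't explain how its system
# reaches conclusions fails the first factor.
failures = screen_vendor({
    "explains_how_system_reaches_conclusions": False,
    "isolates_or_anonymizes_customer_data": True,
    "offers_training_opt_out": True,
    "documents_breach_detection_and_alerting": True,
    "supports_regulatory_compliance": True,
})
print(failures)  # ['explains_how_system_reaches_conclusions']
```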

What AI transparency means to your customers

You’ve done your due diligence, you’ve put in the technical work to launch your AI tool, and now it’s time to put in the honest work to make your AI as transparent as possible to your customers.

Since customer-facing AI tools are usually bots of some kind, my advice is geared toward keeping customers informed about that type of tool, but these tips can be adapted for other use cases as well.

Here are seven things I recommend you communicate to your customers when implementing an AI bot (a simple sketch of how they might fit together follows the list):

  1. Tell customers when they’re talking to a bot. You can’t skip this one — in some states, you’re legally required to proactively disclose when a customer is talking to a bot, but it’s good practice regardless. This is an opportunity to show your brand’s personality, but it can also be a simple opener like, “Hi, I’m a bot! I’m here to help you.”

  2. Give data, privacy, and security disclosures and controls. Depending on the nature of your AI bot, you may need to do this proactively at the beginning of the interaction. Otherwise, you might be able to link to policies, disclosures, and consent/control forms. Regardless, it’s good practice to ensure customers are informed about who has access to their data and how it’s being handled, and that they’re given the option to opt out of certain uses.

  3. Explain why a bot is being used. This is frequently overlooked, but if you briefly explain why you’re using a bot in a certain way, customers will likely feel more positive about the experience. For example, if you’re using a bot to help a customer look up details about their order quickly without having to wait for a human agent, tell them so!

  4. Explain how the bot works. Make sure your customers know how to interact with the bot to get what they need and understand what the bot can do. For instance, explain whether they need to click a button, type or say a few words, or whether they can have a free-form conversation with the bot. Never turn your customers into QA testers.

  5. Explain the limitations of the bot. Be clear and upfront about what the bot can’t do. For example, if the bot can look up details of orders but can’t manage them (like canceling or processing refunds), make sure the bot is able to communicate that in the conversation with the customer.

  6. Make it easy to reach a human. I know you’re likely using a bot to free up a human agent, but not every customer is going to want to talk to a bot, and not every problem can be solved by a bot. Help the people you can with the bot, and make it easy for others to talk to a human. Customer needs come first.

  7. Give alternatives if the bot starts to misbehave. Make sure there’s an off-ramp for customers if the bot starts to hallucinate or otherwise seems to be giving incorrect information. This can be as simple as an instruction to use a specific command if something seems incorrect or always offering the option to talk to a human agent. Also, make sure your human agents are empowered to make things right if a bot has caused harm to a customer.
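To show how these seven pieces might fit together, here’s a minimal sketch of a bot conversation flow in Python. Everything in it (the message wording, the “agent” and “privacy” commands, the handle_message routing) is a hypothetical stand-in; in practice this logic would live in whatever chat platform or bot framework you use.

```python
# A sketch of a bot conversation flow applying the seven recommendations.
# All messages, the "agent" and "privacy" commands, and the routing logic
# are hypothetical; a real bot would hang off your chat platform's API.

DISCLOSURE = (
    "Hi, I'm a bot! I'm here to help you look up your order details "   # 1
    "quickly, without waiting for a human agent."                       # 3
)
PRIVACY_NOTICE = (
    "This chat may be recorded to improve our service. Type 'privacy' " # 2
    "to review your data choices or opt out of certain uses."
)
CAPABILITIES = (
    "You can ask me about order status and shipping. I can't cancel "   # 4, 5
    "orders or process refunds; type 'agent' anytime to reach a human." # 6
)

def handle_message(text: str) -> str:
    """Route one customer message, always leaving an off-ramp to a human."""
    text = text.strip().lower()
    if text == "agent":
        return "Connecting you to a human agent now."          # 6
    if text == "privacy":
        return "Here are your data and consent controls: ..."  # 2
    if "cancel" in text or "refund" in text:
        # Be upfront about limitations instead of improvising an answer (5).
        return ("I can't cancel orders or process refunds myself. "
                "Type 'agent' and a human will take care of it.")
    if "order" in text:
        return "Sure! What's your order number?"
    # Anything the bot can't confidently handle goes to a person,
    # rather than risking a made-up answer (7).
    return "I'm not sure about that one. Type 'agent' to talk to a human."

# A session opens with the disclosures up front, then routes messages.
for line in (DISCLOSURE, PRIVACY_NOTICE, CAPABILITIES):
    print(line)
print(handle_message("Can you refund my order?"))
```

The key design choice here is that every branch, including the fallback, names a path to a human, so the bot never becomes a dead end or an unsupervised improviser.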

AI transparency isn’t a one-time thing

As AI evolves, so will our understanding of what AI transparency means for our companies and our customers. It’s not something that we can research and publish once and be done — we have to be willing to change our practices as the technology advances.

Striving for AI transparency is a process, and honestly, sometimes it’s tedious work that requires investment. But we do it because we appreciate our customers and we want to be responsible brands for them.

In my opinion, maintaining transparency also brings peace of mind. As a business, you can be confident that you’re doing what you need to do to take care of your customers, stay compliant, and remain competitive.

And that’s priceless.
