How can CX and UX teams use AI without losing their human-centric focus? We look at six realistic ways to leverage tech tools in a human-first environment.
“AI is great! It saves time, allows us to outsource time-consuming tasks, makes us more efficient, can generate text and images, and even takes notes for us.”
“AI is terrible! Writing the right prompts to get the results I want is frustrating, the data is full of inaccuracies, and it’s taking jobs away from writers, artists, designers, programmers, and researchers. Even when it transcribes voice notes, it still gets things wrong.”
Which of these two views do you lean towards?

What is AI, Really?
Like so many things in life, our collective reactions to AI exist on a spectrum that runs from intense enthusiasm to deep distrust. In reality, we need to acknowledge that there are more than two points of view on AI usage in business today.
What we need to keep firmly in mind is that AI is a tool. It’s a computer program, and it’s not inherently good or evil. Right now, AI is still in its growth stage – its difficult teen years, if you will.
Because AI isn’t fully mature, it’s still hard to figure out how best to integrate it. And in human-centered professions like CX, there’s an additional level of caution. How can we use AI without becoming fully dependent on it? In other words, can we have our AI tools and still stay human-centric throughout our processes?

6 Tips for Human-Centric AI Usage
At CX by Design, we believe the answer is yes – you can use AI without removing humans from the center of the process. Here are six tips to help you use AI without losing sight of your human-centeredness.

Understand how AI works
To use AI tools effectively, you need to understand the basics of how AI works – particularly if you’ll be using generative AI to create text, images, or mockups. Essentially, Artificial Intelligence is a great big summarizer and compiler. Many publicly available tools like ChatGPT are built on LLMs, or large language models. An LLM uses millions or billions of parameters, tuned on a huge dataset (like much of the public Internet), to predict likely text. While other language models exist, training and running your own model is expensive, so small- and mid-size companies in particular tend to rely on publicly available tools built on LLMs.
Have you heard the programming adage “Garbage In, Garbage Out”? It basically means you can’t expect quality results from lousy ingredients. Because LLMs are trained on a lot of inaccurate data, they can’t be expected to produce great results all the time. Sometimes they’ll be wrong. Expect this to happen (frequently) until AI matures.
It’s also worth noting that AI is context-blind. It doesn’t have the wealth of background knowledge that humans unconsciously bring to every interaction. So you can’t expect it to understand influencing factors like budget, business size, etc. unless you specifically include them in your instructions.
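Since the model only knows what you tell it, the fix is to spell out that context explicitly every time you ask for something. Here’s a minimal sketch of the idea in Python – the business details and prompt wording are hypothetical examples, not output from any specific AI tool:

```python
def build_prompt(task: str, context: dict) -> str:
    """Prepend explicit business context to a request so the
    model doesn't have to guess at influencing factors."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return f"Context:\n{context_lines}\n\nTask: {task}"

prompt = build_prompt(
    "Draft three subject lines for our product-update email.",
    {
        # Hypothetical example values – swap in your own constraints.
        "Business size": "12-person consultancy",
        "Budget": "no paid design tools",
        "Audience": "existing CX clients",
    },
)
print(prompt)
```

The same habit works in a chat window with no code at all: lead with a short “Context:” block before the actual request.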
Fun fact: The GPT in ChatGPT stands for “Generative Pre-trained Transformer”. That offers a clue into the workings of AI: It has to be trained. And it can only transform, or iterate on, what’s already there. It can’t create new ideas from scratch. What it can do is mash up existing things in new ways – sometimes helpful, sometimes not.
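That “mash up existing things” behavior can be seen in miniature with a toy text generator: a tiny Markov chain that, like an LLM writ very small, can only emit words it has already seen, in combinations suggested by its training text. This is a deliberately simplified illustration, not how production LLMs are actually built:

```python
import random

def train(text: str) -> dict:
    """Map each word to every word that ever followed it in the training text."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Walk the chain: every output word comes straight from the training data."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:  # nothing ever followed this word – the model is stuck
            break
        out.append(random.choice(options))
    return " ".join(out)

model = train("the user tests the design and the user loves the design")
print(generate(model, "the"))
```

However long you let it run, the output vocabulary never exceeds the training vocabulary – the toy model remixes what it was fed, and nothing more.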
In short, don’t expect AI to have human-level understanding and creativity. You wouldn’t expect a librarian to perform surgery or litigate a court case; you’d expect them to help find medical or legal information for a doctor or a lawyer. Keep that in mind as you set your expectations for working with AI. You are the professional.

Define your AI ethics
Laws governing AI usage depend on the country you’re operating in. In the US, much still needs to be clarified about how to use AI without infringing on others’ intellectual property. In fact, you can only copyright AI-generated works if there has been a certain amount of human input – more than just writing prompts.
So, for now, the decision on how to implement AI ethically rests with you. Are you comfortable using a technology that may be based (unacknowledged and unauthorized) on others’ work? How about data privacy restrictions and ethical data usage? You may want to look into Fair Use arguments and decide what you’re comfortable supporting. At the very least, try to understand where the data powering your AI model of choice is coming from. If it’s from a large open model, proceed with caution.

Verify and proofread everything
In AI we don’t trust. Why not? Because AI makes mistakes. In a recent CX by Design team meeting, we talked about Lis Hubert’s participation in DevFest; in the meeting summary, our AI notetaking program said we discussed Death Vest. This gave the team a good laugh, but imagine if this had been a client meeting and we’d sent the notes without proofreading them first.
Mistakes are one thing, but AI also hallucinates – confidently inventing an answer when it has no reliable information on a given topic. We’ve seen AI list copywriter rates at over $60,000 per hour, recommend swimsuits as corporate wear, and draft some truly bizarre emails. Check out Everypixel’s AI Hallucination Examples and Why They Happen for some additional instances. As you’ll see, AI isn’t great at distinguishing details or admitting when it doesn’t know the answer. That’s how it’s been programmed, at least for now. Hopefully this will improve with additional training.
In summary, always check everything AI generates. If you don’t believe us, just cast your mind back to the last time you used voice-to-text or autocorrect. How often did the text engine correctly catch your meaning? Exactly.

Treat AI like a tool, not a replacement
We doubt CXers are bothered by tools like Asana or Figma. In fact, most people favor these programs because they allow us to collaborate and communicate quickly. This is especially true for distributed teams.
In the ‘AI is terrible’ scenario above, did you get the idea that the imaginary speaker saw AI as a replacement? A lot of workers do – and to be fair, we have seen AI cut into a lot of job functions. But could this be a case of misapplied AI? Instead of relying on AI to take over job functions and cut out the human element, can we train humans to use AI responsibly?
What does responsible AI use look like?
- A UX designer asks AI to come up with a few design ideas for a new feature, then refines aspects from several of the ideas to create a wireframe to test with human users.
- A CX consultant asks AI to summarize recent research from a trusted source. Then the consultant decides if they want to read the entire report or look for another source.
- A UI designer uses an AI tool to analyze their design for possible accessibility issues before submitting it for review.
- A UX writer uses AI to quickly provide microtext for a wireframe, then evaluates and adjusts the text to match the desired brand voice and tone.
- An animator uses AI-generated images along with actual photographs for reference and then draws or creates a finished product.
What doesn’t responsible AI use look like?
- A UX designer asks AI to come up with a few design ideas for a new feature, then picks one and passes it off as their own work.
- A CX consultant asks AI to summarize recent research without specifying the source and uses these ‘findings’ without verification.
- A UI designer uses an AI tool to analyze their design for possible accessibility issues and submits it without any human user testing.
- A UX writer uses AI to provide microtext for an app, but doesn’t make any changes to the text in later iterations.
- An animator uses AI-generated images in their work without acknowledging it or verifying their accuracy.
In irresponsible scenarios, AI is replacing the human/user element. It’s shortcutting the process, but at the wrong point. Instead of using it to increase quality, the main focus is decreasing time and costs.
In responsible AI use, humans and users are not cut out of the picture. They’re still involved, but AI has done some of the grunt work first. Yes, the process has been streamlined, but the human-centric formula is still intact. There are still humans fully participating in iteration, design, and testing.
Most critically, AI’s contributions are being tested and verified by actual humans. The focus has shifted from using AI to cut costs to having humans use it to improve quality – just like any other piece of software.

Spend the time you saved
So, you’re using AI to streamline ideating and iterating. What will you do with the time you’ve saved?
Ideally, you can reinvest that time in user- and human-centered ways. Instead of thinking “I saved three hours on this design so we can get it to market three hours faster”, can you switch to “I saved three hours here, so let’s invest them in user testing, accessibility analysis, or edge case explorations”?
Realistically, though, you may be under strict time and budget constraints. You may be stretching every hour and every dollar as it is. In that case, an AI tool is still no substitute for testing with actual users – but it’s better than nothing. By getting feedback from customer-facing employees and even team members not involved in the design work, you can remain reasonably human-centered.

Choose personalization over pasting
Finally, let’s talk about style. Did you know that AI has an identifiable verbal and visual style? AI-generated text tends to be wordy and bland, while AI art often looks overly stylized and contains odd details. If you choose to use unmodified gen-AI output, it will be fairly obvious – and it won’t reflect well on your business.
Remember that strictly AI-generated works can’t be copyrighted, so you also risk having your images and articles show up on your competitors’ sites and collateral. Not a good look for anyone.
So, if you decide to use AI, decide in advance how you’ll adjust it to suit your branding. At the barest minimum – and this is if you don’t mind using uncopyrightable content – make sure all AI-generated content and images are reviewed by humans and accurately reflect your intentions and your brand persona.
Most businesses have someone who can adjust AI tools’ output enough to produce something that’s consistent with their brand and intentions. Copy-paste is not an effective solution for establishing a customer experience or a brand identity.

Human-Centered AI Is Possible
In conclusion, you don’t have to choose between using AI and being a human-centered business. If you envision AI as a tool rather than a replacement for employees, you can get the benefits of both: AI’s efficiency and humans’ creativity and understanding. The main idea is to keep humans involved in every process, using AI as an expediter.
We’ve discussed how to evaluate where AI fits in your business in our earlier articles, Limitless Capabilities: CX in the New AI World and Creating Long-Term CX Gains. Check them out for additional details. And as always, feel free to contact us for a free consultation if you’d like help integrating AI into your business.