
LivePerson uses Pachyderm to Create Better Customer Conversations

LivePerson connects people and brands through their AI-powered Conversational Cloud. Their platform lets brands provide conversational experiences on messaging channels like SMS, WhatsApp, Apple Business Chat, and more, so people can stop wasting time on hold and crawling through websites. On top of that, LivePerson’s conversational AI lets brands scale their ability to hold conversations with as many of their customers as possible, as quickly as possible.

With more than 18,000 customers, including HSBC, Orange, GM Financial, and The Home Depot, LivePerson delivers rich conversational solutions to power sales, marketing, and customer care for many of the world’s top brands. They were recently recognized as one of Fast Company’s World’s Most Innovative Companies as more and more brands turned to conversational AI to meet the surge of customer demand on digital channels during the pandemic.

The Challenge: Scaling Customer Service with NLP 

Behind the scenes, LivePerson’s Conversational Cloud lets brands build their own intent classifiers and run a number of other Natural Language Processing (NLP) tasks. An intent classifier tries to figure out what a customer wants to do when they call or write in to the system.

Do they need help with a return? Is their product broken? Do they need help getting it set up?
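The intent-classification idea above can be sketched with a toy bag-of-words scorer. This is a minimal illustration only; the training utterances and intent labels are hypothetical, and LivePerson’s production classifiers are far more sophisticated.

```python
from collections import Counter, defaultdict

# Hypothetical labeled utterances (illustrative only).
TRAINING = [
    ("i want to return this item", "return"),
    ("how do i send this back for a refund", "return"),
    ("my device stopped working", "broken_product"),
    ("the screen is cracked and it won't turn on", "broken_product"),
    ("how do i set this up", "setup_help"),
    ("need help installing the product", "setup_help"),
]

def tokenize(text):
    return text.lower().split()

# Count how often each word appears under each intent.
word_counts = defaultdict(Counter)
for utterance, intent in TRAINING:
    word_counts[intent].update(tokenize(utterance))

def classify(utterance):
    """Score each intent by word overlap with its training data; return the best."""
    tokens = tokenize(utterance)
    scores = {
        intent: sum(counts[t] for t in tokens)
        for intent, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("i would like to return my order"))  # return
print(classify("help me set up the device"))        # setup_help
```

A real intent classifier learns statistical representations of language rather than raw word counts, which is what lets it keep improving as a brand feeds it more of its own conversations.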

The difference was an order of magnitude faster… If it took 10 hours on the old system, it would only take an hour with Pachyderm.


The intent classifier learns on the brand’s own data and gets highly tailored to the kinds of questions their customers ask. LivePerson’s data science team pre-bakes classifiers for different use cases, like financial services, telco, and retail, but each classifier still needs to learn from a brand’s own unique interactions. The Conversational Cloud lets brands continually feed data to it to update the model’s understanding of the world. It also delivers a dashboard so users can see the performance of the classifiers in action, down to the individual message level, showing how each message is handled by the system.

As more and more brands ramped up conversational experiences for their millions of customers, LivePerson faced a challenge: quickly scaling up the Conversational Cloud’s capacity. A popular service with a huge influx of customers can push a system to its limits fast.

Data Silos Created Barriers to Better Conversational AI

Before the team brought in Pachyderm, they were using multiple Python scripts running across many independent nodes. Their data was widely dispersed, with no standardized repository to keep track of it. It all ran on a cluster of powerful machines, but each machine had its own discrete, unshared storage. If they wanted to do a grid search for a model they were optimizing, they had to track jobs across different, loosely connected servers. They monitored everything manually, meaning any issue required them to log into the individual server to troubleshoot it.

We needed a standardized experimentation framework, with the ability to modify scripts with different parameters and rerun experiments, and to have all our data in one place, shared across the machines. That’s where Pachyderm’s powerful pipelining and scaling platform came into the picture.

Orchestrating Innovation with Pachyderm

Pachyderm enables LivePerson to leverage the power of Kubernetes and containers to easily scale their jobs across their machines from a single shared storage location. It automatically breaks a job into parts and schedules them out with no wasted resources. It also makes it simple to transform data and output a freshly updated model. With the click of a button, a customer can kick off a job and get their data committed to a repo, and Pachyderm runs the job in the background and outputs a dataset.
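A Pachyderm pipeline of this shape is declared as a JSON spec. The sketch below is illustrative only; the repo, image, and script names are hypothetical, not LivePerson’s actual configuration:

```json
{
  "pipeline": { "name": "train-intent-classifier" },
  "input": {
    "pfs": {
      "repo": "conversations",
      "glob": "/*"
    }
  },
  "transform": {
    "image": "example/trainer:latest",
    "cmd": ["python3", "/train.py", "/pfs/conversations", "/pfs/out"]
  }
}
```

The `glob` pattern splits each input commit into independent datums, which is how Pachyderm breaks a job into parts and schedules them across the cluster; whatever the transform writes to `/pfs/out` becomes a new commit in the pipeline’s output repo.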

Pachyderm's Console made it a lot easier to troubleshoot, because they were no longer tracking jobs across individual nodes, but across the entire cluster. Because Pachyderm breaks jobs down into smaller parts, it also made it much easier to scale and cut the wait time for customers training their classifiers.

The data science team also got faster as they ran experiments and built their own domain-specific classifiers. When they couldn’t perform experiments fast enough, it was a real bottleneck. The team could now deliver new classifiers to the platform at a much faster rate, which saved customers time by speeding up transfer learning for telco, retail, and other use cases with a similar structure.

In the end, it always comes down to speed, and Pachyderm delivers the speed that LivePerson needs to power conversations between brands and consumers today and in the future, as this new way of doing business continues to accelerate.
