Do they need help with a return? Is their product broken? Do they need help getting it set up?
The intent classifier learns from the brand's own data, becoming highly tailored to the kinds of questions its customers ask. LivePerson's data science team pre-bakes classifiers for different use cases, like financial services, telco, and retail, but each one still needs to learn from a brand's own unique interactions. The Conversational Cloud lets brands continually feed data to it to update the model's understanding of the world. It also delivers a dashboard where users can see the performance of the classifiers in action, down to the individual message level, showing how the system treats each one.
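To make the idea concrete, here is a deliberately tiny sketch of intent classification. The intents and phrases are hypothetical stand-ins, and real classifiers use learned models rather than this toy word-overlap scoring; the point is only to show the shape of the task: labeled examples in, a best-matching intent out.

```python
from collections import Counter

# Hypothetical training data: a few labeled example messages per intent.
TRAINING_DATA = {
    "return": ["I want to return my order", "how do I send this back", "refund my purchase"],
    "broken": ["my product is broken", "the device stopped working", "it arrived damaged"],
    "setup": ["help me set this up", "how do I install it", "setup instructions please"],
}

def train(data):
    """Build a per-intent word-frequency profile from the labeled examples."""
    return {intent: Counter(w for phrase in phrases for w in phrase.lower().split())
            for intent, phrases in data.items()}

def classify(model, message):
    """Score each intent by overlapping word counts and return the best match."""
    words = message.lower().split()
    scores = {intent: sum(counts[w] for w in words) for intent, counts in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING_DATA)
```

Feeding the model more of a brand's own conversations simply means extending `TRAINING_DATA` and retraining, which is the continual-update loop the Conversational Cloud automates at scale.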
As more and more brands ramped up conversational experiences for their millions of customers, LivePerson faced a challenge: quickly scaling up the Conversational Cloud’s capacity. A popular service with a huge influx of customers can push a system to its limits fast.
Before the team brought in Pachyderm, they were running many separate Python scripts across independent nodes. Their data was widely dispersed, with no standardized repository to keep track of it. Everything ran on a cluster of powerful machines, but each machine had its own discrete, unshared storage. If they wanted to do a grid search for a model they were optimizing, they had to track jobs across different, loosely connected servers. They monitored everything manually, so any issue meant logging into the individual server to troubleshoot it.
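For context, a grid search is just an exhaustive sweep over hyperparameter combinations, each of which is one training job. The parameters and scoring function below are hypothetical placeholders, not LivePerson's actual setup; the sketch only illustrates why the old approach hurt: every combination was a job that had to be dispatched to, and tracked on, a separate server by hand.

```python
import itertools

# Hypothetical hyperparameter grid; each combination is one training job.
param_grid = {
    "learning_rate": [0.01, 0.1],
    "max_depth": [3, 5],
}

def score(params):
    """Placeholder for training a model and returning a validation score."""
    return -abs(params["learning_rate"] - 0.1) - abs(params["max_depth"] - 5)

# Enumerate every combination. Before Pachyderm, each of these jobs ran on a
# loosely connected server and had to be monitored individually.
jobs = [dict(zip(param_grid, values))
        for values in itertools.product(*param_grid.values())]
best = max(jobs, key=score)
```

Even this tiny grid produces four jobs; realistic grids produce hundreds, which is exactly the fan-out that needs a scheduler rather than manual tracking.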
Pachyderm enables LivePerson to leverage the power of Kubernetes and containers to easily scale jobs across their machines from a single shared storage location. It automatically breaks a job into parts and schedules them out with no wasted resources. It also makes it simple to transform data and output a freshly updated model. With the click of a button, a customer can kick off a job and get their data committed to a repo, and Pachyderm runs the job in the background and produces a dataset.
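A Pachyderm pipeline of this kind is declared as a JSON spec. The sketch below uses hypothetical repo, pipeline, and image names rather than LivePerson's actual configuration; the key piece is the glob pattern, which tells Pachyderm how to split the input repo into independent datums it can schedule in parallel across the cluster.

```json
{
  "pipeline": {"name": "train-intent-classifier"},
  "description": "Retrain the classifier whenever new conversation data is committed.",
  "input": {
    "pfs": {
      "repo": "conversations",
      "glob": "/*"
    }
  },
  "transform": {
    "image": "example/train:latest",
    "cmd": ["python3", "/train.py"]
  },
  "parallelism_spec": {"constant": 4}
}
```

Committing new data to the `conversations` repo triggers the pipeline automatically, and its results land in a versioned output repo, which is what makes the click-of-a-button workflow possible.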
Pachyderm's Console also made troubleshooting much easier, because the team was no longer tracking jobs across individual nodes but across the entire cluster. And because Pachyderm breaks jobs down into smaller parts, scaling became far simpler, cutting the wait time for customers training their classifiers.
The data science team also got faster at running experiments and building its own domain-specific classifiers. When they couldn't perform experiments quickly enough, it was a real bottleneck. Now the team can deliver new classifiers to the platform at a much more rapid rate, and because transfer learning carries over between similarly structured use cases, such as telco and retail, customers save time too.
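The transfer-learning idea mentioned above can be sketched very simply: start from profiles learned on one domain, then adapt them with only a handful of brand-specific examples. Everything here, intents, phrases, and the word-overlap scoring, is a hypothetical illustration of the principle, not LivePerson's method.

```python
from collections import Counter

def profile(phrases):
    """Word-frequency profile for a set of example phrases."""
    return Counter(w for p in phrases for w in p.lower().split())

# Pretrained "base" knowledge, shared across similarly structured use cases.
base = {
    "return": profile(["return my order", "send it back for a refund"]),
    "setup": profile(["help me set up my device", "install instructions"]),
}

def adapt(base_model, domain_examples):
    """Fine-tune: copy the base profiles, then add the brand's own examples."""
    model = {intent: Counter(counts) for intent, counts in base_model.items()}
    for intent, phrases in domain_examples.items():
        model.setdefault(intent, Counter()).update(profile(phrases))
    return model

def classify(model, message):
    words = message.lower().split()
    return max(model, key=lambda i: sum(model[i][w] for w in words))

# A new telco brand needs only a few of its own examples on top of the base.
telco_model = adapt(base, {"setup": ["activate my sim card"]})
```

Because the base model already covers the shared structure, the new domain needs far less data and far less training time, which is where the customer-facing speedup comes from.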
In the end, it always comes down to speed, and Pachyderm delivers the speed that LivePerson needs to power conversations between brands and consumers today and in the future, as this new way of doing business continues to accelerate.