Evaluating Customer Service Products

Framework-based, In-depth Product Evaluation Reports

We recently published our Product Evaluation Report on Desk.com, Salesforce’s customer service offering for small and mid-sized businesses. “Desk” is a very attractive offering with broad and deep capabilities. It earns good grades on our Customer Service Report Card, including Exceeds Requirements grades in Knowledge Management, Customer Service Integration, and Company Viability.

We’re confident that this report provides input and guidance to analysts as they evaluate, compare, and select customer service products, and we know that it provides product assessment and product planning input for Desk’s product managers. Technology analysts and product managers are the primary audiences for our reports; we research and write to help exactly these roles. Like all of our Product Evaluation Reports on customer service products that include multiple apps (case management, knowledge management, web self-service, communities, and social customer service), it’s a big report, more than 60 pages.

Big is good. Depth and detail are what make these reports valuable. Our research always includes studying a product’s licensed admin, user, and, when accessible, developer documentation: the manuals or online help files that come with a product. We read the patents or patent applications that are a product’s technology foundation. Whenever offered, we deploy and use the products. (We took the free 30-day trial of Desk.) We’ll watch suppliers’ demonstrations, but we rely on the actual product and its underlying technologies.

On the other hand, we’ve recently been hearing from some, especially product marketers charged with reviewing report drafts (we never publish without the supplier’s review), that the reports are too big. Okay. Point taken. Perhaps it is time to update our Product Evaluation Framework, the report outline, to produce shorter, more actionable reports: reports with no less depth and detail but with less descriptive content and more salient analytic content. It’s also time to tighten up our writing.

Product Evaluation Reports Have Two Main Parts

Our Product Evaluation Reports have had two main parts: Customer Service Best Fit and Customer Service Technologies. Customer Service Best Fit “presents information and analysis that classifies and describes customer service software products…speed(ing) evaluation and selection by presenting easy to evaluate characteristics that can quickly qualify an offering.” Customer Service Technologies examines the implementations of a product’s customer service applications and their foundation technologies as well as its integration and reporting and analysis capabilities. This part holds the reports’ depth and detail (and most of their content). Going forward, we’ll continue with this organization.

Streamlining Customer Service Best Fit

We will revamp and streamline Customer Service Best Fit, improving naming and emphasizing checklists. The section will now have this organization:

  • Applications, Channels, Devices, Languages
  • Packaging and Licensing
  • Supplier and Product
  • Best Prospects and Sample Customers
  • Competitors

Applications, Channels, Devices, Languages are lists of key product characteristics, characteristics that quickly qualify a product for deeper consideration. More specifically, applications are the sets of customer service capabilities “in the box” with the product—case management, knowledge management, and social customer service, for example. Channels are assisted-service, self-service, and social. We list apps within supported channels to show how what’s in the box may be deployed. Devices are the browsers and mobile devices the product supports for internal users and for end customers. Languages are two lists: one for the languages the product supports for its administration and internal users and one for the languages it supports for end customers.

Packaging and Licensing presents how the supplier offers the product, the fees that it charges for the offerings, and the consulting services available and/or necessary to help licensees deploy the offerings.

Supplier and Product presents high-level assessments of the supplier’s and the product’s viability. For the supplier, we present history, ownership, staffing, financial performance, and customer growth. For the product, we present history, current development approach, release cycle, and future plans.

Best Prospects and Sample Customers lists the target markets for the product (the industries, business sizes, and geographies where the product best fits). This section also covers the current customer base for the product: a list of typical/sample customers within those target markets and, if possible, screen shots of their deployments.

Competitors lists the product’s closest competitors, its best alternatives. We’ll also include a bit of analysis explaining what makes them the best alternatives and where the subject product has differentiators.

Tightening-up Customer Service Technologies

Customer Service Technologies is the key value-add and most significant differentiator of our Product Evaluation Reports. It’s why you should read our reports, but, as we mentioned, it’s also the main reason why they’re big.

We’ve spent years developing and refining the criteria of our Evaluation Framework. The criteria are the results of continuing work with customer service products and technologies and of our complementary work with the people who are products’ prospects, licensees, suppliers, and competitors. We’re confident that we evaluate the technologies of customer service products by the most important, relevant, and actionable criteria. Our approach creates common, supplier-independent and product-independent analyses. These analyses enable the evaluation and comparison of similar customer service products and result in faster and lower-risk selection of a product that best fits a set of requirements.

However, we have noticed that the descriptive content that is the basis for our analyses has gotten a bit lengthy and repetitive (repeating information in Customer Service Best Fit). We plan to tighten up Customer Service Technologies content and analysis in these ways:

  • Tables
  • Focused Evaluation Criteria
  • Consistent Analysis
  • Reporting

Too much narrative and analysis has crept into Tables. We’ll make sure that Tables are bulleted lists with little narrative and no analysis.

Evaluation criteria have become too broad. We’ve been including detailed descriptions and analyses of related and supported resources along with the resource that is the focus of the evaluation. For example, when we describe and analyze the details of a case model, we’ll no longer also describe and analyze the details of user and customer models. Rather, we’ll just describe the relationships between the resources.

Our analyses will have three sections. The first will summarize what’s best about a product. The second will present additional description and analysis where Table content needs further examination. The third will be “Room for Improvement,” areas where the product is limited. This approach will make the reports more actionable and more readable as well as shorter.

In reporting, we’ll stop examining instrumentation, the collection and logging of the data that serves as report input. The presence (or absence) of reports about the usage and performance of customer service resources is really what matters. So, we’ll call the criterion “Reporting” and we’ll list the predefined reports packaged with a product in a Table. We’ll discuss missing reports and issues in instrumentation in our analysis.

Going Forward

Our Product Evaluation Report on Microsoft Dynamics CRM Online Service will be the first written against the streamlined Framework. Expect it in the next several weeks. Its Customer Service Best Fit section really is smaller. Each of its Customer Service Technologies sections is smaller, too, as well as more readable and more actionable.

Here’s the graphic of our Product Evaluation Framework, reflecting the changes that we’ve described in this post.

[Graphic: Product Evaluation Framework]

Please let us know if these changes make sense to you and please let us know if the new versions of the Product Evaluation Reports that leverage them really are more readable and more actionable.


Nuance Nina Web

Flexible and Accurate Answers to Customers’ Questions

We published our Product Evaluation Report on Nina Web from Nuance Communications this week. Nina Web is virtual assisted-service software for web browsers on desktops, laptops, and mobile devices. Type a question in a text box and a Nina Web-based virtual agent will deliver an answer or will engage you in a dialog when it needs more information to answer your question. Answers are text, images, links, URLs, and/or data from external applications.

Nina Web was originally developed as VirtuOz Intelligent Virtual Agent by VirtuOz, Inc., a privately held firm founded in France in 2002. Nuance acquired VirtuOz in March 2013. Nina Web became the third member of the Nina family of customer self-service offerings from Nuance’s Enterprise division, joining Nina IVR and Nina Mobile.

Nuance has made and continues to make significant improvements to the VirtuOz IVA. A bit less than a year after the acquisition, Nina Web is a stronger and more attractive virtual agent offering, earning good grades on our Report Card for Virtual Assisted-Service. (See the Product Evaluation Report for the details.)

Most significantly, Nuance’s Enterprise division developers have just about completed what they call a “brain transplant” for Nina Web, replacing the question analysis and matching technology built by VirtuOz with Nuance’s Natural Language Understanding (NLU) technology, the same technology used by Nina IVR and Nina Mobile. NLU combines Natural Language Processing (NLP) with statistical machine learning. NLP does some parsing and linguistic analysis of customers’ questions. Statistical machine learning, which Nuance implements in neural networks, matches customers’ questions with typical and expected “User Questions” and variations of User Questions that analysts create and store in Nina Web’s knowledgebase. Analysts also create knowledgebase answers and associate an answer with each User Question. When NLU matches a customer’s question with a User Question, Nina Web presents the answer associated with the User Question to the customer.
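To make that matching flow concrete, here is a toy sketch of question-to-User-Question matching. It is our own illustration, not Nuance’s NLU: real NLU combines NLP with neural-network machine learning, while this sketch stands in with simple token overlap, and the User Questions and answers shown are hypothetical.

```python
import re

def tokens(text):
    """Lowercase a question and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def match_user_question(customer_question, knowledgebase):
    """Return the answer whose User Question best overlaps the input.

    knowledgebase maps analyst-created User Questions to their answers.
    Overlap is scored with Jaccard similarity, a toy stand-in for the
    statistical matching that NLU actually performs.
    """
    q = tokens(customer_question)
    best_uq = max(
        knowledgebase,
        key=lambda uq: len(q & tokens(uq)) / len(q | tokens(uq)),
    )
    return knowledgebase[best_uq]

# Hypothetical analyst-created User Questions and associated answers.
kb = {
    "how do I reset my password": "Visit Settings > Security and choose Reset Password.",
    "where is my order": "Check Order Status under My Account for tracking details.",
}

print(match_user_question("I forgot my password, how can I reset it?", kb))
```

As the post notes, analysts improve accuracy by adding and refining User Questions and their variations; in this sketch, that is simply adding entries to `kb`.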

Analysts “train” NLU’s machine learning algorithms with User Questions and their variations. Nina Web provides the facilities and tools for initial training and ongoing refinement/retraining. Analysts add, delete, and modify User Questions as the intent and the vocabulary of customers’ questions change to ensure that their Nina Web virtual agent delivers accurate answers. They must refine answers, too.

As you might infer from our description, NLU is a black box. Train it with a set of User Questions and it will match customers’ questions with them. The critical tasks for a Nina Web deployment are the initial specification and continuing refinement of User Questions and of answers. Nina Web insulates deployment work from NLU, from the complexity of NLP and statistical machine learning. Analysts do not specify language models or matching rules. They do not (and cannot) configure and/or customize neural network processing. Knowledge management is the focus of deployment efforts. That can make for easier and faster deployment, a strength and differentiator for Nina Web.

One more thing. We mentioned that NLU is the analysis and matching technology in Nina IVR and Nina Mobile as well as in Nina Web. One set of User Questions can match customer questions with one set of answers across telephone, web, and mobile channels. Together, Nina IVR, Nina Mobile, and Nina Web can deliver a consistent cross-channel customer self-service experience, but, today, that consistency requires creating and managing three copies of the set of User Questions and three copies of the set of answers because the products are not integrated. Each Nina deploys independently of the others. But, cross-Nina integration is on Nuance’s product roadmap. An integrated, cross-channel Nina will be quite a customer service offering.

Next IT Alme

This week’s report is a product evaluation of Next IT’s virtual agent offering Alme (pronounced “All me”). The report updates our November 28, 2012 product evaluation. Just a reminder: Alme is the software behind about 20 deployments, all for B2C organizations. You might have had some of your travel questions answered by Jenn of Alaska Airlines or Alex of United Airlines. Next IT is one of the pioneers in virtual agent technology. The firm was founded in 2002 in Spokane, WA and introduced its first product in 2004.

Remember that Alme uses Natural Language Processing (NLP) to analyze customers’ questions and to match them with answers in its knowledgebase and in external applications. Key components are an NLP engine and a language model. The language model specifies language constructs that adapt Alme to the lexicon of the deployment’s domain. The engine’s analysis of customers’ questions, guided by the language model, allows Alme virtual agents’ answers to be dynamic and personalizable through access to and analysis of data from external applications.

So what’s new in Alme? Lots. In the year or so since our last evaluation, Next IT has been quite busy. Its developers have made Alme a more attractive, more powerful offering that’s easier to deploy and to manage through significant improvements to its language model and its tools.

  • Language model improvements help virtual agents deliver more accurate and more personalized answers and solutions to customers’ questions and problems. For example, Alme can use information within customers’ questions to establish a context for their “conversations” with the virtual agent. This context makes conversations more natural and helps virtual agents deliver answers and solutions more quickly. Also, Alme now has a new conversational model that helps virtual agents perform complex tasks for customers. And another new language model feature helps virtual agents handle ambiguous questions and questions that contain idiomatic phrases.
  • New and improved tools make virtual agents faster and easier to deploy and manage and make Next IT’s clients more self-sufficient. In our previous evaluation, we had identified limitations in change management and team support. Next IT has addressed those limitations quite nicely in the tools of the current version. Also, the new Response Management toolset decouples the complex work of language model design, specification, and maintenance from simpler content/knowledge management work. As a result, organizations that license Alme can do more of the work to deploy and manage Alme virtual agents and become less dependent on Next IT professional services.

Alme’s key strength and most significant differentiator has been its capability to deliver very sophisticated answers to complex questions. Language model improvements make Alme stronger. For example, healthcare companies might use the new conversational model to collect the information required to complete an insurance application, a referral to a specialist, or a follow-up reminder for a prescription. On the topic of healthcare, Next IT has begun a major and very timely initiative in that market segment. On October 10, 2013, the firm announced Alme for Healthcare. Alme for Healthcare uses all of the new language model capabilities, especially the new conversational model, for both of its applications: a clinical application that helps inform, coach, and engage patients and an administrative application that helps patients and administrative/support staff with forms, processes, and information retrieval. Look for announcements about the companies using Alme for Healthcare soon.

Improved tools make Alme more attractive and more competitive. Time and cost to deployment have been issues for all customer service applications. Deploying virtual agent products has been particularly expensive because language models are complex, domain-specific, deployment-specific, and proprietary. Companies that license virtual agent software depend on their suppliers to design, specify, implement, test, and manage language models and knowledgebases. Time to deployment can be pretty long, approaching a year in some cases. Next IT has provided all the services for initial virtual agent deployment and ongoing management, and some of its customers use those services. However, new tools and tool improvements give customers the opportunity to do much of this work themselves and give Next IT’s professional services consultants facilities that speed and simplify the tasks that they perform for customers. The results: shortened time and reduced cost to deployment, faster ROI, and faster and easier ongoing management.

Virtual agents have become far more than avatars and FAQs in a box on your support page. Alme demonstrates and proves that virtual agents can do serious customer service work and Next IT continues to make Alme more attractive. A virtual agent should be an integral component of every customer service application portfolio.

IntelliResponse VA

Accurate Answers with Fast and Easy Deployment

We’ve just published our Product Review of IntelliResponse Virtual Agent (IntelliResponse VA), the virtual assisted-service offering from IntelliResponse Systems, Inc., a privately held supplier founded in 2000 and based in Toronto, ON, Canada. The report completes our latest research series on virtual agents/virtual assisted-service.

We’ve published evaluations of the four leading virtual agent offerings:

  • Creative Virtual V-Person
  • IntelliResponse VA
  • Next IT Active Agent
  • Nuance Nina Web (VirtuOz Intelligent Virtual Agent when we published. Nuance acquired VirtuOz earlier this year.)

Virtual agents implemented on all four can deliver a single answer to a customer’s question on web, mobile, and social channels. Expect the answer to be correct about 90 percent of the time.

Virtual agents deliver bottom-line benefits. They can lower cost to serve as compared to live agents and they can improve customer satisfaction by improving the speed, accuracy, and consistency of the answers to customers’ questions.

Contrast virtual agents with search and knowledgebase approaches that deliver many answers and leave it to the customer to pick the correct one. The single correct answer makes virtual agents useful for answering many kinds of customers’ questions: certainly customer service questions but also questions about your business and about your business policies, processes, and practices, about your products, and everything about your customers’ relationships with you (accounts, orders, bills, and passwords, for example). They can be your agents for marketing, for sales, and for service.

As with your live agents, it takes time and effort to get virtual agents ready to engage with your customers. You have to give them knowledge about the business areas that they support. You have to train them to understand your customers’ questions and to correlate or match those questions with the correct answers. The knowledge is contained in/represented by the items in their knowledgebases, their stores of predefined answers. Anticipate the questions that your customers will ask, specify the answers, and store them in the virtual agent’s knowledgebase. All four virtual agent products have knowledgebases and provide tools and facilities for creating and managing answers. Your customers’ questions will change and evolve with their relationships and with changes to your business and to your offerings of products and services. A virtual agent’s knowledge has to keep up with those changes (just like live agents’ knowledge).
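The anticipate-specify-store cycle described above can be sketched as a minimal knowledgebase structure. The class and its operations are our own illustration (an assumed structure, not any of the four products’ actual schemas):

```python
class Knowledgebase:
    """A toy virtual-agent knowledgebase: anticipated questions mapped
    to predefined answers, with the operations needed to keep knowledge
    current as products, policies, and customers' questions change."""

    def __init__(self):
        self.answers = {}  # anticipated question -> predefined answer

    def add(self, question, answer):
        """Store an answer for a question you anticipate customers asking."""
        self.answers[question] = answer

    def update(self, question, new_answer):
        """Refine an existing answer when the business changes."""
        if question not in self.answers:
            raise KeyError(f"No answer yet for: {question}")
        self.answers[question] = new_answer

    def retire(self, question):
        """Remove an answer that no longer applies."""
        self.answers.pop(question, None)

kb = Knowledgebase()
kb.add("What is your return policy?", "Returns are accepted within 30 days.")
# The policy changes, so the knowledge must change with it.
kb.update("What is your return policy?", "Returns are accepted within 60 days.")
print(kb.answers["What is your return policy?"])
```

Keeping a virtual agent’s knowledge current then amounts to calling `add`, `update`, and `retire` as questions and offerings evolve, just as live agents must keep learning.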

Training virtual agents to understand your customers’ questions and to correlate/match them to correct answers is the harder part. Virtual agents use very sophisticated and complex technology to analyze customers’ questions and to match them with the answers in their knowledgebases. Analysis and matching is the core processing that virtual agents perform. Analysis and matching technology is the virtual agent supplier’s core IP, its secret sauce. Each of the four has patented some or all of this technology. The suppliers want you to appreciate the sophistication and power of the technology. They don’t give you much detail of what it does or how it works.

(We describe and evaluate how a virtual agent analyzes and matches questions in our product review. We actually read many of the suppliers’ patents to help us understand the technology. In our reports, we describe it a bit, but we focus on what you’ll have to do to use it effectively.)

Creative Virtual V-Person, Next IT Active Agent, and Nuance Nina Web use Natural Language Processing (NLP) technology for their analysis and matching. Each has its own NLP implementation. NLPs perform computational linguistic analyses on customers’ questions: parsing for subjects, verbs, objects, and qualifiers, extracting entities, identifying actors and roles, and codifying relationships. This is sophisticated and complex processing.

For a successful virtual agent deployment, you provide critical input to your virtual agent supplier’s NLP, for example:

  • Words that your customers will likely include in their questions
  • Misspellings, typos, slang, idioms, and stems for those words
  • Conditions/rules/expressions for how your customers combine words into phrases
  • Parameters for configuring the NLP processing

If you don’t specify the actual words and their various alternative forms that your customers use in their questions, then your virtual agent cannot deliver answers. Complete specification of your customers’ vocabularies is critical. Virtual agent products help considerably with packaged dictionaries of common industry and application terms, but it’s on you to provide the vocabulary specific to your business and your products.
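The kinds of NLP input listed above might be represented as a specification like the following sketch. All field names and values here are hypothetical, our own illustration rather than any supplier’s actual configuration format:

```python
# Hypothetical vocabulary specification: canonical terms, the variant
# forms customers actually type, phrase rules, and NLP tuning parameters.
vocabulary_spec = {
    "terms": {
        "statement": {
            "variants": ["statment", "stmt", "bill"],  # misspellings, slang
            "stems": ["statements"],
        },
        "password": {
            "variants": ["pasword", "passwd"],
            "stems": ["passwords"],
        },
    },
    "phrase_rules": [
        # How customers combine words into phrases.
        {"pattern": ["reset", "password"], "intent": "password_reset"},
    ],
    "nlp_parameters": {
        "min_match_confidence": 0.6,  # illustrative tuning parameter
    },
}

def normalize(word):
    """Map a variant or stem back to its canonical term."""
    for term, spec in vocabulary_spec["terms"].items():
        if word == term or word in spec["variants"] + spec["stems"]:
            return term
    return word

print(normalize("passwd"))  # -> password
```

If a customer’s word is missing from the specification (say, a regional slang term for “bill”), `normalize` passes it through unrecognized, which is exactly how gaps in vocabulary specification become unanswered questions.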

Virtual agent suppliers also provide consulting services to help with NLP specification. These services are essential for a successful virtual agent deployment and for ongoing management of your virtual agent. Remember that customers’ questions are always changing. So is your business.

IntelliResponse VA uses machine learning technology for its analysis and matching. Machine learning is an algorithmic approach. The algorithm learns by training it with sample data in a controlled environment. It applies its learning when it goes live. The sample data that you provide to train an IntelliResponse VA virtual agent are the typical questions that you want it to answer, not the words and phrases in those questions, not various forms of those words, not the relationships between them, just the questions. IntelliResponse VA can do the rest of the work, even to accommodate the ongoing changes in customers’ questions and in your business. IntelliResponse VA virtual agent deployment is easier and faster than NLP-based deployments and delivers answers with the same level of accuracy. Read our report for the details and note that IntelliResponse can also provide those consulting services to help you deploy and manage virtual agents. The details of the work will be a bit different and there will be less work to do, but the objective will be the same—deploying virtual agents that answer customers’ questions.
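The contrast with the NLP inputs listed earlier can be illustrated with a toy training sketch: the only input is sample questions labeled with their answers. This is a naive word-count model of our own devising, not IntelliResponse’s patented technology, and the sample questions and answer identifiers are hypothetical:

```python
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (question, answer_id) pairs.

    Learn which words signal which answer by counting word occurrences
    per answer. No vocabulary lists, stems, or phrase rules required.
    """
    model = defaultdict(Counter)
    for question, answer_id in samples:
        model[answer_id].update(question.lower().split())
    return model

def classify(model, question):
    """Score each answer by how often it has seen the question's words."""
    words = question.lower().split()
    return max(model, key=lambda a: sum(model[a][w] for w in words))

# Hypothetical training data: just questions and their answers.
samples = [
    ("how do i reset my password", "password_reset"),
    ("i forgot my password", "password_reset"),
    ("where is my order", "order_status"),
    ("when will my order arrive", "order_status"),
]
model = train(samples)
print(classify(model, "help me reset a forgotten password"))
```

Note how the training set never enumerates misspellings, stems, or phrase rules; the algorithm generalizes from the labeled questions alone, which is the deployment-effort difference the report describes.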