Virtual Assistant Update

 

We recently published “Virtual Assistant Update.” It’s a broad, not-too-deep update on virtual assistant technologies, products, suppliers, and markets from the perspective of the five leading suppliers: [24]7, Creative Virtual, IBM, Next IT, and Nuance. These are the leaders because they:

  • Have been in the virtual assistant business for some time (from 16 years for [24]7, via its acquisition of IntelliResponse, to four years for IBM).
  • Have attractive and useful virtual assistant technology.
  • Offer virtual assistant products that are widely used and well proven.
  • Want to be in the virtual assistant business and have company plans and product plans to continue.

The five suppliers are quite diverse. There’s the public $80 billion IBM and the public $2 billion Nuance. Then there are the private [24]7, a venture-backed company big on acquisitions, and the more closely held Creative Virtual and Next IT. Despite these big corporate-level differences, the five’s virtual assistant businesses are quite similar. Roughly, they’re all about the same size, and the five compete as equals to acquire and retain virtual assistant business.

By the way, across the past 12 to 24 months, business has been good for all five suppliers. Customer growth has been very good across the board. Our suppliers have expanded into new markets and have introduced new and/or improved products.

Natural Language Processing and Machine Learning

Technologies are quite similar, too. All five have built their virtual assistant offerings with the same core technologies: Natural Language Processing (NLP) and machine learning.

Virtual Assistants use NLP to recognize intents of customer requests. NLP implementations usually comprise an engine that processes customer requests using an assortment of algorithms to parse and understand the words and phrases in a customer’s request. An NLP engine’s processing is guided by customizable and/or configurable deployment-specific mechanisms such as language models, grammars, and rules. These mechanisms accommodate the vocabularies of a deployment’s business, products, and customers.
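To make the idea concrete, here’s a minimal, hypothetical sketch of rule-guided intent recognition in Python. The intent names and patterns are our own illustration, not any supplier’s implementation; real NLP engines use far richer language models, grammars, and parsing algorithms than a handful of regular expressions.

```python
import re

# Hypothetical, deployment-specific rules: each intent is recognized when a
# customer request matches one of its patterns. These names and patterns are
# our own invention, for illustration only.
INTENT_RULES = {
    "make_payment":   [r"\b(pay|payment|pay my bill)\b"],
    "check_balance":  [r"\b(balance|how much .* owe)\b"],
    "reset_password": [r"\b(reset|forgot) .*password\b"],
}

def recognize_intent(request: str) -> str:
    """Return the first intent whose rule matches the normalized request."""
    text = request.lower().strip()          # simple normalization
    for intent, patterns in INTENT_RULES.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return "unknown"                        # fall back to escalation or chat
```

For example, recognize_intent("I want to pay my bill") returns "make_payment", while a request no rule anticipates falls through to "unknown". Deployment-specific mechanisms like these rules are exactly what accommodates a business’s own vocabulary.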

Virtual assistants use machine learning technology to match actual customer requests with anticipated customer requests and then to select the content or execute the logic associated with the anticipated requests. (Machine learning algorithms learn from and then make predictions on data. Algorithms learn from training. Analysts/scientists train them with sample, example, or typical deployment-specific input then with feedback or supervision on correct and incorrect predictions. A trained algorithm is a deployment-specific machine learning model. The accuracy of models can improve with additional and continuing training. Some machine learning implementations are self-learning.)
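As a rough illustration of that matching step, here’s a tiny bag-of-words similarity matcher in Python. It’s a sketch under our own assumptions (hypothetical training samples, cosine similarity, and an arbitrary threshold), not how any of the five suppliers implement machine learning; their trained models are far more sophisticated.

```python
from collections import Counter
import math

# Hypothetical training data: anticipated customer requests, each labeled
# with the content or logic to select. A real deployment trains on many more
# samples and improves the model with continuing feedback and supervision.
TRAINING = [
    ("i want to pay my bill",        "payment_flow"),
    ("make a payment on my account", "payment_flow"),
    ("what is my account balance",   "balance_answer"),
    ("how much do i owe",            "balance_answer"),
]

def _vector(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def match_request(request, threshold=0.3):
    """Match an actual request to the most similar anticipated request."""
    rv = _vector(request)
    best = max(TRAINING, key=lambda sample: _cosine(rv, _vector(sample[0])))
    score = _cosine(rv, _vector(best[0]))
    return best[1] if score >= threshold else None   # None -> escalate
```

Here, match_request("I need to pay my bill") selects "payment_flow", and a request unlike anything anticipated falls below the threshold and returns None, the point at which a deployment might escalate to chat.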

Complex and Sophisticated Work: Consultant-led or Consultant-assisted

The work to adapt NLP and machine learning technology implementations for virtual assistant deployments is sophisticated and complex. This is work for experts: scientists, analysts, and developers in languages, data, and algorithms. The approach to this work differentiates virtual assistant suppliers and products, and it drives virtual assistant product selection. Here’s what we mean.

All the virtual assistant suppliers have built tools and packaged predefined resources to make the work simpler, faster, and more consistent. Some suppliers have built tools for the experts, and these suppliers have also built consulting organizations with the expertise to use those tools. Successful deployments of their virtual assistant offerings are consultant-led; they require the services of the suppliers’ (or the suppliers’ partners’) consulting organizations.

Some suppliers have built tools that further abstract the work and make it possible for analysts, business users, and IT developers to deploy. While these suppliers have also built consulting organizations with expertise in virtual assistant technologies and in their tools, successful deployments of their virtual assistant offerings are consultant-assisted and may even approach self-service.

So, a key factor in the selection of a virtual assistant product is deployment approach: consultant-led or consultant-assisted. Creative Virtual, Next IT, and Nuance offer consultant-led virtual assistant deployments. [24]7 and IBM offer consultant-assisted deployments. For example, IBM Watson Virtual Agent includes tools that make it easy to deploy virtual assistants. In the Illustration below, we show the workspace wherein analysts specify the virtual assistant’s response to the customer request to make a payment. Note that the possible responses leverage content, tools, and facilities packaged with the product.


© 2017 IBM Corporation

Illustration 7. This Illustration shows the Watson Virtual Agent workspace for specifying responses from the bot/virtual assistant.

 

Which is the better approach? Consultant-assisted is our preference, but we’ve learned over our long years of research and consulting that deployment approach is a function of corporate style, personality, and culture. Some businesses and organizations give consultants the responsibility for initial and ongoing technology deployments. Some businesses want to do it themselves. For virtual assistant software, corporate style could very well be a key factor in product selection.

 

 

 

 


Microsoft Dynamics 365 for Customer Service

Serious Customer Service Capabilities

In our more than 10 years of customer service research, publishing, and consulting, we’d never before published a report about a Microsoft offering. It’s not because Microsoft hasn’t had a customer service offering or that the company hasn’t had success in business applications. Since 2003, its CRM suite has always included a customer service app. And its Dynamics CRM brand has built a customer base of tens of thousands of accounts and millions of users. But Dynamics CRM had always been more about its sales app and that app’s integration with Office and Outlook. Customer service capabilities had been a bit limited. No longer.

Beginning in November 2015, the improvements in two new releases—CRM 2016 and CRM 2016 Update 1—and, in November 2016, the introduction of the new Dynamics 365 brand have strengthened, even transformed, Microsoft’s customer service app and have made Microsoft a player to consider in the high end of the customer service space.

Our Product Evaluation Report on Microsoft Dynamics 365 for Customer Service, published December 1, 2016, will help that consideration. These are the new and/or significantly improved customer service components:

  • Knowledge management
  • Search
  • Customer service UI
  • Web self-service and communities
  • Social customer service

Let’s take a closer but brief look at each of them.

Knowledge Management

Knowledge Management is the name of a new customer service component. Introduced with CRM 2016, it’s a comprehensive knowledge management system with a rich and flexible knowledge model, a large set of useful knowledge management services, and an easy to learn and easy to use toolset. The best features of Knowledge Management are:

  • Visual tools of Interactive Service Hub, the customer service UI
  • Knowledge lifecycle and business processes that implement and support the lifecycle
  • Language support and translation
  • Version control
  • Roles for knowledge authors, owners, and managers

For example, Knowledge Management comes with a predefined but configurable knowledge lifecycle with Author, Review, Publish, and Expire phases. The screen shot in Figure 1 shows the steps in the Author phase.

Figure 1. This screen shot shows the steps in the Author phase of the knowledge management process.

Note that Knowledge Management is based on technology from Parature, a Reston, VA-based supplier with a customer service offering of the same name that Microsoft acquired in 2014. Beginning with the introduction of Dynamics 365, Microsoft no longer offers the Parature customer service product.

Search

Search is not a strength of Dynamics 365. Search sources are limited. Search query syntax is simple. There are few search analyses and few facilities for search results management. However, with the Dynamics 365 rebranding Microsoft has made improvements. Categorized Search, the new name of the search facility in Dynamics 365, retrieves database records with fields that begin with the words in search queries and lets administrators and seekers facet (Categorize) search results. The new Relevance Search adds relevance and stemming analyses. Microsoft still has work to do, but faceting, stemming, and relevance are a start to address limitations.
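To illustrate roughly what prefix matching plus faceting means, here’s a small hypothetical Python sketch. The records, fields, and types are our own invention, not the Dynamics 365 data model; the point is only the behavior: match query words against the beginnings of field values, then let seekers categorize the hits.

```python
from collections import Counter

# Hypothetical records for illustration only; a real deployment searches
# Dynamics 365 entities (Cases, Knowledge Articles, and so on).
RECORDS = [
    {"title": "Payment failed",       "type": "Case"},
    {"title": "Payment portal setup", "type": "Article"},
    {"title": "Password reset",       "type": "Article"},
]

def categorized_search(query, records):
    """Prefix-match each query word against record fields, then facet
    (categorize) the hits by record type."""
    words = query.lower().split()
    hits = [r for r in records
            if any(field.lower().startswith(w)
                   for w in words
                   for field in r.values())]
    facets = Counter(r["type"] for r in hits)   # facet counts per type
    return hits, facets
```

For the query "payment", this returns the two payment records plus facet counts of one Case and one Article; note that a prefix match alone would miss variants like "paying", which is the gap that Relevance Search’s stemming analysis addresses.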

Customer Service UI – Interactive Service Hub

Interactive Service Hub (ISH) provides several useful and very attractive capabilities in Dynamics 365. It’s the UI for Knowledge Management, one of two UIs for case management, and a facility for creating and presenting dashboards. For the case management and knowledge management UIs, ISH provides visual tools that are easy to learn and easy to use. The tools let agents perform every case management task and let authors and editors perform every knowledge management function. For example, Figure 2 shows a screen shot of ISH’s presentation of an existing Case—the Name of the Case at the top left, the Case information to display “SUMMARY | DETAILS | CASE RELATIONSHIPS | SLA” under the Name, the phases of the deployment’s case management process “IDENTIFY QUALIFY RESEARCH RESOLVE” within a ribbon near the top of the screen, and the (SUMMARY) Case information in the center.

Figure 2. This screen shot shows the Interactive Service Hub display of an existing Case.

In addition to tools for building dashboards, ISH also packages useful predefined dashboards, two for case management and two for knowledge management. The four help customer service managers and agents and knowledge management authors and editors manage their work. Figure 3 shows an example of the My Knowledge Dashboard. It presents information useful to authors and editors very visually and interactively.

Figure 3. This screen shot shows an example of the My Knowledge Dashboard.

Web Self-service and Communities

We were quite surprised to learn that, prior to the May 2016 introduction of CRM 2016 Update 1, Dynamics 365 for Customer Service and all of its predecessor products did not include facilities for building and deploying web self-service or communities sites. This limitation was addressed in Update 1 with the then-named CRM Portal service, renamed the Portal service in Dynamics 365. The Portal service is a template-based toolkit for developing (web development skills are required) and deploying browser-based web self-service and communities/forums sites. It’s based on technology from Adxstudio, which Microsoft acquired in September 2015, and it packages templates for a Customer Service Portal and a Community Portal. Note that Dynamics 365 for Customer Service licenses include one million page views per month for runtime usage of sites built on the Portal service (licenses may be extended with additional page views per month).

Social Customer Service

Microsoft Social Engagement is a separately packaged and separately priced social customer service offering that Microsoft introduced early in 2015. Social Engagement provides facilities that listen for social posts across a wide range of social sources (Instagram, Tumblr, WordPress, and YouTube as well as Facebook and Twitter), that analyze the content and sentiment of those posts, and that interact with social posters. In addition, Social Engagement integrates with Dynamics 365 for Customer Service. Through this integration, the automated or manual analysis of social posts can result in creating and managing customer service Cases. It’s a strong social customer service offering. What’s new is that Microsoft now bundles Social Engagement with Dynamics 365 for Customer Service. That’s a very big value add.

All This and More

We’ve discussed the most significant new and improved capabilities of Dynamics 365 for Customer Service. Knowledge Management, Interactive Service Hub, improved Search, the Portal service, and bundled Social Engagement certainly strengthen the offering. Although not quite as significant, Microsoft added and improved many other capabilities, too. For example, there are language support improvements, improvements to integration with external apps, new Customer Survey and “Voice of the Customer” feedback capabilities, and the use of Azure ML (Machine Learning) to suggest Knowledge Management Articles as Case resolutions automatically based on Case attribute values. Bottom line, Microsoft Dynamics 365 for Customer Service deserves serious consideration as the key customer service app for large businesses and public sector organizations, especially those that are already Microsoft shops.

Evaluating Customer Service Products

Framework-based, In-depth Product Evaluation Reports

We recently published our Product Evaluation Report on Desk.com, Salesforce’s customer service offering for small and mid-sized businesses. “Desk” is a very attractive offering with broad and deep capabilities. It earns good grades on our Customer Service Report Card, including Exceeds Requirements grades in Knowledge Management, Customer Service Integration, and Company Viability.

We’re confident that this report provides input and guidance to analysts in their efforts to evaluate, compare, and select customer service products, and we know that it provides product assessment and product planning input for product managers. Technology analysts and product managers are the primary audiences for our reports. We research and write to help exactly these roles. Like all of our Product Evaluation Reports about customer service products that include multiple apps—case management, knowledge management, web self-service, communities, and social customer service—it’s a big report, more than 60 pages.

Big is good. It’s their depth and detail that make them so. Our research for them always includes studying a product’s licensed admin, user, and, when accessible, developer documentation, the manuals or online help files that come with a product. We read the patents or patent applications that are a product’s technology foundation. Whenever offered, we deploy and use the products. (We took the free 30-day trial of Desk.) We watch suppliers’ demonstrations, but we rely on the actual product and its underlying technologies.

On the other hand, we’ve recently been hearing from some, especially product marketers when they’re charged to review report drafts (we never publish without the supplier’s review), that the reports are too big. Okay. Point taken. Perhaps it is time to update our Product Evaluation Framework, the report outline, to produce shorter, more actionable reports, reports with no less depth and detail but with less descriptive content and more salient analytic content. It’s also time to tighten up our content.

Product Evaluation Reports Have Two Main Parts

Our Product Evaluation Reports have had two main parts: Customer Service Best Fit and Customer Service Technologies. Customer Service Best Fit “presents information and analysis that classifies and describes customer service software products…speed(ing) evaluation and selection by presenting easy to evaluate characteristics that can quickly qualify an offering.” Customer Service Technologies examines the implementations of a product’s customer service applications and their foundation technologies as well as its integration and reporting and analysis capabilities. Here lie the reports’ depth and detail (and most of the content). Going forward, we’ll continue with this organization.

Streamlining Customer Service Best Fit

We will revamp and streamline Customer Service Best Fit, improving naming and emphasizing checklists. The section will now have this organization:

  • Applications, Channels, Devices, Languages
  • Packaging and Licensing
  • Supplier and Product
  • Best Prospects and Sample Customers
  • Competitors

Applications, Channels, Devices, Languages are lists of key product characteristics, characteristics that quickly qualify a product for deeper consideration. More specifically, applications are the sets of customer service capabilities “in the box” with the product—case management, knowledge management, and social customer service, for example. Channels are assisted-service, self-service, and social. We list apps within supported channels to show how what’s in the box may be deployed. Devices are the browsers and mobile devices the product supports for internal users and for end customers. Languages are two lists: one for the languages in which the product deploys and supports for its administration and internal users and one for the languages it supports for end customers.

Packaging and Licensing presents how the supplier offers the product, the fees that it charges for the offerings, and the consulting services available and/or necessary to help licensees deploy the offerings.

Supplier and Product presents high-level assessments of the supplier’s and the product’s viability. For the supplier, we present history, ownership, staffing, financial performance, and customer growth. For the product, we present history, current development approach, release cycle, and future plans.

Best Prospects and Sample Customers are lists of the target markets for the product—the industries, business sizes, and geographies wherein the product best fits. This section also contains the current customer base for the product, a list of typical/sample customers within those target markets and, if possible, presents screen shots of their deployments.

Competitors lists the product’s closest competitors, its best alternatives. We’ll also include a bit of analysis explaining what makes them the best alternatives and where the subject product has differentiators.

Tightening-up Customer Service Technologies

Customer Service Technologies is the key value-add and most significant differentiator of our Product Evaluation Reports. It’s why you should read our reports, but, as we mentioned, it’s also the main reason why they’re big.

We’ve spent years developing and refining the criteria of our Evaluation Framework. The criteria are the results of continuing work with customer service products and technologies and of our complementary work with the people who are products’ prospects, licensees, suppliers, and competitors. We’re confident that we evaluate the technologies of customer service products by the most important, relevant, and actionable criteria. Our approach creates common, supplier-independent and product-independent analyses. These analyses enable the evaluation and comparison of similar customer service products and result in faster and lower risk selection of a product that best fits a set of requirements.

However, we have noticed that the descriptive content that is the basis for our analyses has gotten a bit lengthy and repetitive (repeating information in Customer Service Best Fit). We plan to tighten up Customer Service Technologies content and analysis in these ways:

  • Tables
  • Focused Evaluation Criteria
  • Consistent Analysis
  • Reporting

Too much narrative and analysis has crept into Tables. We’ll make sure that Tables are bulleted lists with little narrative and no analysis.

Evaluation criteria have become too broad. We’ve been including detailed descriptions and analyses of related and supporting resources along with the resource that is the focus of the evaluation. For example, when we describe and analyze the details of a case model, we’ll no longer also describe and analyze the details of user and customer models. Rather, we’ll just describe the relationships between the resources.

Our analyses will have three sections. The first will summarize what’s best about a product. The second will present additional description and analysis where Table content needs further examination. The third will be “Room for Improvement,” areas where the product is limited. This approach will make the reports more actionable and more readable as well as shorter.

In reporting, we’ll stop examining instrumentation, the collection and logging of the data that serves as report input. The presence (or absence) of reports about the usage and performance of customer service resources is really what matters. So, we’ll call the criterion “Reporting” and we’ll list the predefined reports packaged with a product in a Table. We’ll discuss missing reports and issues in instrumentation in our analysis.

Going Forward

Our Product Evaluation Report about Microsoft Dynamics CRM Online Service will be the first to be written on the streamlined Framework. Expect it in the next several weeks. Its Customer Service Best Fit section really is smaller. Each of its Customer Service Technologies sections is smaller, too, and more readable and more actionable as well.

Here’s the graphic of our Product Evaluation Framework, reflecting the changes that we’ve described in this post.

Illustration. Our Product Evaluation Framework.

Please let us know if these changes make sense to you and please let us know if the new versions of the Product Evaluation Reports that leverage them really are more readable and more actionable.

The Helpdesks: Desk.com, Freshdesk, Zendesk

We’ve added our Product Evaluation Report on Freshdesk to our library of in-depth, framework-based reports on customer service software. We put this report on the shelf, so to speak, next to our Product Evaluation Reports on Desk.com and Zendesk. The three products are quite a set. They’re similar in many ways, remarkably so. Here are a few of those similarities:

The products are “helpdesks,” apps designed to provide an organization’s customers (or users) with information and support about the organization’s products and services. Hence, their names are (alphabetically) Desk.com, Freshdesk, and Zendesk.

They have the same sets of customer service apps and those apps have very similar capabilities: case management, knowledge management and community/forum with a self-service web portal and search, social customer service supporting Facebook and Twitter, chat, and telephone/contact center. Case management is the core app and a key strength for all of the products. Each has business rules-based facilities to automate case management tasks. On the other hand, knowledge management and search are pretty basic in all of them.

The three also include reporting capabilities and facilities for integrating external apps. Reporting has limitations in all three. Integration is excellent across the board.

These are products that deploy in the cloud. They support the same browsers and all three also have native apps for Android and iOS devices.

All three are packaged and priced in tiers/levels/editions of functionality. Their licensing is by subscription with monthly, per user license fees.

Simple, easy to learn and easy to use, and cross/multi/omni-channel are the ways that the suppliers position these offerings. Our evaluations were based on trial deployments for each of the three products. We found that all of them support these positioning elements very well.

Small (very small, too) and mid-sized businesses across industries in all geographies are their best fits, although the suppliers would like to move up market. The three products have very large customer bases—somewhere around 30,000 accounts for Desk.com and Zendesk and more than 50,000 accounts for Freshdesk per a claim in August from Freshdesk’s CEO. Note that Desk.com was introduced in 2010, Freshdesk in 2011, and Zendesk in 2004.

Suppliers’ internal development organizations design, build, and maintain the products. All three suppliers have used acquisitions to extend and improve product capabilities.

While the products are similar, the three suppliers are quite different. Salesforce.com offers Desk.com. Salesforce is a publicly held, San Francisco, CA-based, $8 billion corporation founded in 1999, and it has multiple product lines. Freshdesk Inc. offers Freshdesk. It’s a privately held corporation founded in 2010 and based in Chennai, India. Zendesk, Inc. offers Zendesk. The company was founded in 2007 in Denmark and reincorporated in the US in 2009. It’s publicly held and based in San Francisco, CA, and its 2015 revenues were more than $200 million.

These differences—public vs. private, young vs. old(er), large vs. small(er), single product line vs. multiple product line—will certainly influence many selection decisions. However, all three are viable suppliers and all three are leaders in customer service software. The supplier risk in selecting Desk.com, Freshdesk, or Zendesk is small.

Then, where are the differences that result in making a selection decision? The differences are in the ways that the products’ developers have implemented the customer service applications. The differences become clear from actually using the products. Having actually used all three products in our research, we’ve learned the differences and we’ve documented them in our Product Evaluation Reports. Read them to understand the differences and to understand how those differences match your requirements. There’s no best among Desk.com, Freshdesk, and Zendesk but one of them will be best for you.

For example, here’s the summary of Freshdesk evaluation, the grades that the product earned on our Customer Service Report Card. “Freshdesk earns a mixed Report Card—Exceeds Requirements grades in Capabilities, Product Management, Case Management, and Customer Service Integration, Meets Requirements grades in Product Marketing, Supplier Viability, and Social Customer Service, but Needs Improvement grades in Knowledge Management, Findability, and Reporting and Analysis.”

Case Management is where Freshdesk has its most significant differences, differences that come from its large set of case management services and facilities, its support for case management teams, its automation of case management tasks, and its easy to learn, easy to use case management tools. For example, Arcade is one of Freshdesk’s facilities for supporting case management teams. Arcade is a collection of three optional gamification facilities that set and track goals for agents’ customer service activities.

  • Points. Agents earn Points for resolving Tickets in a fast and timely manner and lose Points for being late and for having dissatisfied customers, accumulating Points toward six predefined skill levels.
  • Trophies. Agents earn Trophies for monthly Ticket management performance.
  • Quests. Arcade awards bonus Points for achieving customer service Quests such as forum participation or publishing knowledgebase Solutions.

Administrators can configure Arcade’s Points and skill levels. Trophies and Quests have predefined goals; however, administrators can turn Quests on or off. The Illustration below shows the workspace that administrators use to configure Points.

Illustration. The workspace for configuring Arcade Points.

Freshdesk can be a Customer Service Best Fit for many small and mid-sized organizations. Is it a Best Fit for you? Read our Report to understand why and how.

Nuance Nina Virtual Assistants

We evaluated Nina, the virtual assistant offering from Nuance, for the third time, publishing our Product Evaluation Report on October 29, 2015. This Report covers both Nina Mobile and Nina Web.

Briefly, by way of background, Nina Mobile provides virtual assisted-service on mobile devices. Customers ask questions of or request actions from Nina Mobile’s virtual assistants by speaking or typing them. Nina Mobile’s virtual assistants deliver answers in text. Nina Mobile was introduced in 2012. We estimate that approximately 15 Nina Mobile-based virtual assistants have been deployed in customer accounts.

Nina Web provides virtual assisted-service through web browsers on PCs and on mobile devices. Customers ask questions of or request actions from Nina Web’s virtual assistants by typing them into text boxes. Nina Web’s virtual assistants deliver answers or perform actions in text and/or in speech. Nina Web was introduced as VirtuOz Intelligent Virtual Agent in 2004. Nuance acquired VirtuOz in 2013. We estimate that approximately 35 Nina Web-based virtual assistants have been deployed in customer accounts.

The two products now have common technologies, tools, and a development and deployment platform. That’s a big deal. They had been separate and pretty much independent products, sharing little more than a brand. Nuance’s development team has been busy and productive. Nina also has many new and improved capabilities. Most significant are a new, additional toolset that supports key tasks in initial deployment and ongoing management; PCI (Payment Card Industry) certification, which means that Nina virtual assistants can perform ecommerce tasks for customers; support for additional languages; and packaged integrations with chat applications.

Nina Evaluation Process

We did not include an evaluation of Nina’s Ease of Evaluation. Our work on the Nina Product Evaluation Report was well underway before we added that criterion to our framework. So, we’ll offer that evaluation here.

For our evaluation, we used:

  • Product documentation, which was provided to us by Nuance under an NDA
  • Demonstrations, especially of new tools and functionality, conducted by Nuance product management staff
  • Web content of nuance.com
  • Online content of Nina deployments
  • Nuance’s SEC filings
  • Discussions with Nuance product management and product marketing staff
  • Thorough (and very much appreciated) review of report draft

We also leveraged our knowledge of Nina, knowledge that we acquired in our research for two previously published Product Evaluation Reports from July 2012 and January 2014. We know the product, the underlying technology, and the supplier. So we were able to focus our research on what was new and improved.

Product Documentation

Product documentation, the end user/admin manuals for the Nina IQ Studio (NIQS) and the new Nuance Experience Studio (NES) toolsets, was the key source for our research. We found the manuals to be well written and reasonably easy to understand. Samples and examples illustrated simple use cases and supported descriptions very well. Showing more complex use cases, especially for customer/virtual assistant dialogs, would have been very helpful. Personalization facilities could be explained more thoroughly. Also, there’s a bit of inconsistency in terminology between the two toolsets and their documentation.

Nina Deployments

Online content of Nina deployments helped our research significantly. Within the report, we showed two examples of businesses that have licensed and deployed Nina Web: up2drive.com, the online auto loan site for BMW Financial Services NA, LLC, and the Swedish-language site for Swedbank, Sweden’s largest savings bank. The up2drive Assist box accesses the site’s Nina Web virtual assistant. We asked, “How do I qualify for the lowest new car rate?” See the Illustration just below.

Illustration. The up2drive Assist box and its Nina Web virtual assistant.

Online content of Nina Mobile deployments shows how virtual assistants can perform actions for customers. For example, we showed how Dom, the Nina Mobile virtual assistant, could help you order pizza from Domino’s in our blog post of May 14, 2015. See https://www.youtube.com/watch?v=noVzvBG0GD0.

Take care when using virtual assistant deployments for evaluation and selection. They’re only as good as the deploying organization wants to make them. Their limitations are almost never the limitations of the virtual assistant software. Every virtual assistant software product that we’ve evaluated has the facilities to implement and deliver excellent customer service experience. Virtual assistant deployments, like all customer experience deployments, are limited by the deploying organization’s investment in them. The level of investment controls which questions they can answer, which actions they can perform, how well they can deal with vague or ambiguous questions and action requests, and their support for dialogs/conversations, personalization, and transactions.

No Trial/Test Drive

Note that Nuance did not provide us with a product trial/test drive of Nina. In fact, Nuance does not offer Nina trials/test drives to anyone. That’s typical of and common for virtual assistant software. Suppliers want easy and fast self-service trials that lead prospects to license their offerings. Virtual assistant software trials are not any of these things. They’re not designed for self-service deployment either for free or for fee.

Why not? Because virtual assistant software is complex. Even the simplest deployment requires building a knowledgebase of answers to the typical and expected questions that customers ask; using virtual assistant facilities to deal with vague and ambiguous questions (by engaging in a dialog/conversation, escalating to chat, or presenting a “no results found” message, for example); and using virtual assistant facilities to perform the actions that customers request and deciding how to perform them. (Performing actions will likely require integration with apps external to virtual assistant apps.) This is not the stuff of self-service trials and test drives.
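To make the complexity concrete, here is a minimal sketch (all names, thresholds, and answers are hypothetical, not any supplier’s implementation) of the fallback logic a deployment must configure: match a request against a knowledgebase of expected questions, then clarify, escalate, or report “no results found” when the match is weak.

```python
# Minimal sketch of knowledgebase matching with fallback handling.
# Everything here is illustrative; real products use NLP engines, not
# simple string similarity.
from difflib import SequenceMatcher

KNOWLEDGEBASE = {
    "how do i reset my password": "Visit Settings > Security and choose Reset.",
    "what are your support hours": "Support is available 9am-5pm, Mon-Fri.",
}

MATCH_THRESHOLD = 0.75    # confident answer; tuned per deployment
CLARIFY_THRESHOLD = 0.45  # below this, escalate or report no results

def answer(request: str) -> str:
    """Return the best answer, a clarifying prompt, or an escalation."""
    request = request.lower().strip()
    best_q, best_score = None, 0.0
    for question in KNOWLEDGEBASE:
        score = SequenceMatcher(None, request, question).ratio()
        if score > best_score:
            best_q, best_score = question, score
    if best_score >= MATCH_THRESHOLD:
        return KNOWLEDGEBASE[best_q]
    if best_score >= CLARIFY_THRESHOLD:
        return f'Did you mean: "{best_q}"?'
    return "No results found. Would you like to chat with an agent?"
```

Even this toy version shows why such work isn’t self-service trial material: the knowledgebase, thresholds, and fallback behaviors all need deployment-specific decisions.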

In addition, most virtual assistant suppliers have not yet invested in building tools that speed and simplify the work that organizations must perform for the initial deployment and ongoing management of virtual assistant software even after it has been licensed. Instead, suppliers offer their consulting services. (That’s changing for Nuance with toolsets like NES and for several other virtual assistant software suppliers, and that’s certainly a topic for a later time.)

Thank You Very Much, Nuance

One more point about Ease of Evaluation. Our research goes into the details of customer service software. We publish in-depth Product Evaluation Reports. We demand a significant commitment from suppliers to support our work. Nuance certainly made that commitment and made Nina Easy to Evaluate for us. We so appreciate Nuance’s support and the time and effort taken by its staff.

Nina was very easy for us to evaluate. The product earns a grade of Exceeds Requirements in Ease of Evaluation.

Zendesk, Customer Service Software That’s Easy to Evaluate

Zendesk Product Evaluation

Zendesk is the customer service offering from Zendesk, Inc., a publicly held, San Francisco, CA-based software supplier with 1,000 employees that was founded in 2004. The product provides cloud-based, cross-channel case management, knowledge management, communities and collaboration, and social customer service capabilities across assisted-service, self-service, and social customer service channels.

We evaluated Zendesk against our Evaluation Framework for Customer Service and published our Product Evaluation Report on October 22. Zendesk earned a very good Report Card—Exceeds Requirements grades in Product History and Strategy, Case Management, and Customer Service Integration, and Meets Requirements grades for all other criteria but one, Social Customer Service. Its Needs Improvement grade in Social Customer Service is less an issue with packaged capabilities than it is a requirement for a specialized external app designed for and positioned for wide and deep monitoring of social networks.

Evaluation Framework

Our Evaluation Framework considers an offering’s functionality and implementation, what a product does and how it does it. It also considers the supplier and the supplier’s product marketing (positioning, target markets, packaging and pricing, competition) and product management (release history and cycle, development approach, strategy and plans) for the offering.

We rely on the supplier for product marketing and product management information. First we gather that info from the supplier’s website and press releases and, if the supplier is publicly held, from the supplier’s SEC filings. We speak directly with the supplier for anything else in these areas.

For functionality and implementation, the supplier typically gives us (frequently under NDA) access to the product’s user and developer documentation, the manuals and help files that licensees get. In this era of cloud computing, we’ve been more and more frequently getting access to the product itself through online trials. We also read the supplier’s patents and patent applications to learn about the technology foundation of functionality and implementation.

In addition, we entertain the supplier’s presentations and demonstrations. They’re useful to get a feel for the style of the product and the supplier and to understand future capabilities. However, to really understand the product, there’s no substitute for actual usage (where we drive) and/or documentation.

Our research process includes insisting that the supplier reviews and provides feedback on a draft of the Product Evaluation Report. This review process ensures that we respect any NDA, improves the accuracy and usefulness of the information in the report, and prevents embarrassing the supplier and us.

Ease of Evaluation, a New Evaluation Criterion

Our frameworks have never had an Ease of Evaluation criterion. We’ve always figured that we’d do the work to make your evaluation and selection of products easier, faster, and less costly. Our evaluation of Zendesk has us rethinking that. We’ve learned that our Product Evaluation Reports can speed and shorten your evaluation and selection process, but that your process doesn’t end with our reports. You do additional evaluation, modifying and extending our criteria or adding criteria to represent requirements specific to your organization, your business, and/or your application for a product. Understanding Ease of Evaluation can further speed and shorten your evaluation and selection process.

So, beginning with our next Product Evaluation Report, you’ll find an Ease of Evaluation criterion in our framework.

Zendesk Was Very Easy to Evaluate

By the way, Zendesk would earn an Exceeds Requirements grade for Ease of Evaluation. We did a 30-day trial of the product. We signed up for the trial online—no waiting. During the trial we submitted cases to Zendesk Support and we used the Zendesk community forums. In addition, Zendesk.com provided a wealth of detailed information about the product, including technical specifications and a published RESTful API.

Scroll down to the bottom of Zendesk.com’s home page to see a list of UNDER THE HOOD links.

under the hood

Looking at the UNDER THE HOOD links in a bit more detail:

  • Apps and integrations is a link to a marketplace for third-party apps. Currently there are more than 300 of them.
  • Developer API is a link to the documentation of Zendesk’s RESTful, JavaScript API. It lists and comprehensively describes more than 100 services.
  • Mobile SDK is a link to documentation for Android and iOS SDKs and for the Web Widget API. (The Web Widget embeds Zendesk functionality such as ticketing and knowledgebase search in a website.)
  • Security is a link to descriptions of security-related features and lists of Zendesk’s security compliance certifications and memberships.
  • Tech Specs is a link to a comprehensive collection of documents that describe Zendesk’s functionality and implementation.
  • What’s new is a link to high-level descriptions of recently added capabilities.
  • Uptime is a link to info and charts about the availability of Zendesk, Inc.’s cloud computing infrastructure.
  • Legal is a link to a description of the Terms of Service of the Zendesk offering.

We spent considerable time in Tech Specs and Developer API. We found the content to be comprehensive, well organized and easy to access, and well written. The combination of the product trial and UNDER THE HOOD made Zendesk easy to evaluate. And, we did not have to sign an NDA for access to any of this information.
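As a taste of what that published RESTful API makes possible, here is a hedged sketch of building a ticket-creation request. The endpoint path and payload shape follow Zendesk’s ticket API conventions; the subdomain and ticket content are placeholders, and we only construct the request here rather than send it.

```python
# Sketch: build (but don't send) a Zendesk ticket-creation request.
# The subdomain "example" and the ticket text are placeholders.
import json
from urllib.request import Request

def build_create_ticket_request(subdomain: str, subject: str, body: str) -> Request:
    # Zendesk tickets are created by POSTing {"ticket": {...}} JSON
    payload = {"ticket": {"subject": subject, "comment": {"body": body}}}
    return Request(
        url=f"https://{subdomain}.zendesk.com/api/v2/tickets.json",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_create_ticket_request("example", "Printer on fire", "Smoke is coming out.")
```

In a real integration you would add authentication and actually send the request; the point is that the API is documented well enough to write code like this without an NDA.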

Many suppliers make their offerings as easy to evaluate as Zendesk, Inc. made Zendesk for us. On the other hand, many suppliers are not quite so willing to share detailed information about their products and, especially, their underlying technologies. Products and technologies are, after all, software suppliers’ key IP. They have every right to protect this information. They don’t feel that patent protection is enough. Their offerings are much harder to evaluate at the level of our Product Evaluation Reports.

Consider Products That Are Easy to Evaluate

We feel, as you should, that in-depth evaluations are essential to the selection of customer service products. You’ll be spending very significant time and money to deploy and maintain these products. You should never rely on supplier presentations and demonstrations alone to justify those expenditures. Certainly rely on our reports and use them as the basis for your further, deeper evaluation, including our new Ease of Evaluation criterion. Put those suppliers that facilitate these evaluations on your short lists.

Next IT Alme: Helping Customers Do All Their Work

On September 2, 2004, we published my article, “May I Help You?” It was a true story about my experience as a boy working in my dad’s paint and wallpaper store. The experience taught me all about customer service.

The critical lesson that I learned from my dad and from working in the store was that customers want and need your help for every activity that they perform in doing business with you, from their first contact with you through their retirement.

That help was answering customers’ questions and solving customers’ problems. That’s the usual way that we think of customer service: helping with exceptions, the times that customers cannot do their work. But that help was also performing “normal” activities on customers’ behalf—providing the right rollers, brushes, and solvents for the type of paint they wanted to use, for example—or collaborating with customers to perform normal activities together, such as selecting a paint color for trim or a wallpaper pattern.

At Kramer’s Paint, my dad or I delivered all of that help—normal work and exceptions work. In your business, you deliver the help to perform customers’ normal planning, shopping, buying, installing/using, and (account) management activities through the software of self-service web sites and/or mobile apps or through the live interactions of your call center agents, in-store associates, or field reps. And, you deliver the help for customers’ exception activities through customer self-service apps on the web, social networks, or mobile devices or through the live interactions of customer service staff in call centers, stores, and in the field.

Virtual Assistants Crossover to Perform Normal Activities

Recently, in our customer service research, we’ve begun to see virtual assistant software apps cross over from helping customers with exception activities to also performing normal activities on customers’ behalf, activities like taking orders, completing applications, and managing accounts. We wrote about this crossover a bit in our last post about IBM Watson Engagement Advisor’s Dialog facility. And, we provided links to crossover examples of Creative Virtual V-Person at Chase Bank and Nuance Nina Mobile at Domino’s.

Alme, the virtual assistant software app from Spokane, WA-based supplier Next IT, can cross over to help customers perform normal activities, too. In fact, Alme has always performed normal activities for customers. One of our first reports about virtual assistants, a report that we published on March 13, 2008, discussed Jenn, Alaska Airlines’ Alme-based virtual assistant. We asked Jenn to find a flight for us through this request: “BOS to Seattle departing December 24 returning January 1.” Jenn did a lot of work to perform this normal activity. Her response was fast, accurate, and complete. We asked Jenn again in our preparation for this post. “She” prepared the “Available Flights” page for us. Once again, her answer was fast, accurate, and complete. All that’s left to do is select the flights. The illustration below shows our request and Jenn’s response.

alaska airlines blog

Next IT Alme Provides Excellent Support for Normal Activities

Alme provides these excellent facilities for performing normal activities, facilities that are among its key strengths and competitive differentiators:

  • Support for complex, multi-step interactions
  • Rules-based personalization
  • Integration with external applications

Let’s take a closer look at them.

Support for Complex, Multi-Step Interactions

For normal activities, complex, multi-step interactions help virtual assistants collect the information needed to complete an insurance or loan application, order a meal, or configure a mobile device and the telecommunications services to support it, for example. Alme supports complex, multi-step interactions with Directives and Goals.

Directives

Directives are hierarchical dialogs of prompt and response interactions between Alme virtual assistants and customers. They’re stored and managed in Alme’s knowledgebase, and Alme provides tools for building and maintaining them. A Directive’s dialog begins when Alme’s processing of a customer’s request matches the request to one of the nodes in the Directive. The node presents its prompt to the customer as a text box into which the customer enters a text response or as a list of links from which the customer makes a selection. Alme then processes the text responses or the link selections. This processing moves the dialog:

  • To another node in the Directive
  • Out of the Directive
  • Into a different Directive.

That customers’ requests can enter, reenter, or leave Directives at any of their nodes is what makes Directives powerful, flexible, and very useful. Alme’s analysis and matching engine processes every customer request and every response to a Directive prompt the same way. When the request (re)triggers a Directive, Alme automatically (re)establishes the Directive’s context, including all previous text responses and link selections. For example, financial services companies might use Directives to implement retirement planning for their customers. A customer might leave the Directive to gather information about joint accounts held at the bank with a spouse before returning to the Directive to continue the planning, opening, and funding of an Individual Retirement Account (IRA).
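The Directive idea can be sketched as a small graph of prompt/response nodes, where processing a response moves the dialog to another node, completes the Directive, or leaves the unmatched request to the matching engine. This is an illustration of the concept, not Next IT’s implementation; the node names and toy IRA Directive are hypothetical.

```python
# Illustrative sketch of a Directive: a graph of prompt/response nodes.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str
    # maps a recognized response to the next node id; None marks completion
    transitions: dict = field(default_factory=dict)

# A toy "open an IRA" Directive (hypothetical)
DIRECTIVE = {
    "start": Node("Would you like a traditional or Roth IRA?",
                  {"traditional": "fund", "roth": "fund"}),
    "fund":  Node("How much would you like to deposit to open the account?",
                  {"amount": None}),  # None = Directive complete
}

def step(node_id: str, response: str):
    """Process one response; return (next_node_id, next_prompt)."""
    node = DIRECTIVE[node_id]
    next_id = node.transitions.get(response)
    if next_id is None and response in node.transitions:
        return None, "Directive complete."
    if next_id is None:
        # Unmatched response: in Alme, the matching engine could route this
        # out of the Directive and re-enter later with context preserved.
        return node_id, node.prompt
    return next_id, DIRECTIVE[next_id].prompt
```

The interesting behavior described above—entering, leaving, and re-entering with context—would sit on top of such a graph in the matching engine.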

Goals

Goals let virtual assistants collect a list of information from customers through prompt and response interactions to help perform and personalize their activities. The virtual assistant stores the elements of information that the customer provides in its session data for use anytime within a customer/virtual assistant session. Alme can also use its integration facilities to store elements of the list persistently in external apps.

Goals have the ability to respond to customers dynamically, based on the information the Goal has collected. For example, if the customer provides all of the Goal’s information in one interaction, then the Goal is complete, or fulfilled, and the Alme virtual assistant can perform the activity that is driven by the information. However, if the customer provides, say, two of four required information items, then the Goal can change its responses and request the missing information, leading the customer through a conversation. Goals are created by authors or analysts, who specify a list of variables to store the information to be collected and the actions to be taken when customers do not provide all the information in the list. In addition, Goals can be nested, improving their power and giving them flexibility as well as promoting their reuse.

Healthcare providers (Healthcare is one of Next IT’s target markets.) might use Goals to collect a list of information from patients prior to a first appointment. Retailers might use them to collect a set of preferences for a personal e-shopper virtual assistant.
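The Goal pattern described above—collect required variables across one or more interactions and ask only for what’s missing—can be sketched briefly. The variable names mirror the flight-booking example; the function and message text are illustrative, not Alme’s actual API.

```python
# Sketch of Goal-style slot filling: merge newly provided items and
# ask only for what is still missing. Names are illustrative.
REQUIRED = ["departure_city", "arrival_city", "departure_date", "return_date"]

def advance_goal(collected: dict, new_info: dict):
    """Merge new items into session data; return (state, next response)."""
    collected = {**collected, **new_info}
    missing = [v for v in REQUIRED if v not in collected]
    if not missing:
        return collected, "Goal fulfilled: searching flights."
    return collected, f"Please provide: {', '.join(missing)}."
```

If the customer supplies everything at once, the Goal completes in a single turn; otherwise the response adapts to request only the missing items, which is exactly the dynamic behavior described above.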

Rules-Based Personalization

Personalization is essential for any application supporting customers’ normal activities. Why? Because personalization is the use of customer information—profile attributes, demographics, preferences, shopping histories, order histories, service contracts, and account data—to tailor a customer experience for individual customers. Performing activities on customers’ behalves requires some level of personalization.

For example, virtual assistants use a customer’s login credentials to access external apps that manage account or order data and, then, use that order data to help customers process a refund or a return. Or, to complete an auto insurance application, virtual assistants need profile data and demographic data to price a policy.

Alme’s rules-based personalization facilities are Variables, Response Conditions, and AppCalls. They are implemented within the knowledgebase items that contain the responses to customers’ requests.

  • Variables provide personalization and context. They contain profile data, external application data, and session data, for example.
  • Response Conditions are expressions (rules) on Variables. Response Conditions select responses and/or set data values of their Variables.
  • AppCalls (Application Calls) pass parameters to and execute external applications. They use Alme’s integration facilities to access external apps through JavaScript and Web Services APIs. For example, Jenn, Alaska Airlines’ virtual assistant, uses AppCalls to process information extracted from the customer’s question—departure city, arrival city, departure date and return date—and normalizes and formats the information for correct handling by the airlines’ booking engine. This AppCall checks city pairs to ensure the flight is valid and formats and normalizes dates so that the booking engine can display appropriate choices. AppCalls also integrate Alme with backend systems. Ann, Aetna’s virtual assistant, uses AppCalls to collect more than 80 profile variables from Aetna’s backend systems to facilitate performing tasks and to personalize answers for Aetna’s customers after they log in and launch Ann. (See the screen shot of Ann, below.)
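The interplay of Variables and Response Conditions can be sketched as rules over a variable dictionary, where the first condition that holds selects the response. This is a conceptual illustration; the rule syntax, variables, and responses are all hypothetical, not Alme’s.

```python
# Sketch of rules-based personalization: Response Conditions are
# predicates over Variables; the first true condition selects a response.
RESPONSES = [
    (lambda v: v.get("logged_in") and v.get("plan") == "gold",
     "Welcome back, {name}! As a Gold member you get free expedited returns."),
    (lambda v: v.get("logged_in"),
     "Welcome back, {name}! Start a return from your order history."),
    (lambda v: True,  # default when no personalization data is available
     "Please log in so I can look up your orders."),
]

def select_response(variables: dict) -> str:
    for condition, template in RESPONSES:
        if condition(variables):
            return template.format(name=variables.get("name", "there"))
```

In a product, the conditions would be authored in a toolset rather than as code, and the variables would be populated from profile data, session data, and AppCalls.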

Integration with External Applications

The resources that virtual assistant applications “own” are typically a knowledgebase (of answers and solutions to expected customers’ questions and problems) and accounts on Facebook and Twitter to enable members of these social networks to ask questions and report problems. So, to perform normal activities, virtual assistants need to integrate with the external apps that own the data and services that support those activities.

Alme integrates with external customer service applications through JavaScript (front-end) and Web Services (back-end) interfaces. In Alme 2.2, the current version, Next IT introduced a re-architected Alme platform that is more modular and more extensible. The new platform has published JavaScript and Web Services interfaces to all Alme functionality and supports JavaScript and Web Services connections to external resources.

AppCalls use Alme’s integration facilities. To process an AppCall successfully, developers must have established a connection between Alme and an external application. Jenn integrates Alme with Alaska Airlines’ booking engine. Ann integrates Alme with Aetna’s backend systems. Here’s a screen shot.

aetna blog
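The normalization work an AppCall performs before handing off to a booking engine, as in the Jenn example, can be sketched briefly. The city codes, date formats, and function name are assumptions for illustration, not Alaska Airlines’ or Next IT’s actual interfaces.

```python
# Sketch of AppCall-style normalization: map city names to airport codes,
# validate the city pair, and reformat dates for a booking engine.
from datetime import datetime

CITY_CODES = {"bos": "BOS", "boston": "BOS", "seattle": "SEA", "sea": "SEA"}

def normalize_flight_request(depart, arrive, out_date, back_date):
    def code(city):
        c = CITY_CODES.get(city.lower())
        if c is None:
            raise ValueError(f"Unknown city: {city}")
        return c
    def iso(d):  # accept "December 24 2015"-style input
        return datetime.strptime(d, "%B %d %Y").strftime("%Y-%m-%d")
    origin, dest = code(depart), code(arrive)
    if origin == dest:  # city-pair validity check
        raise ValueError("Origin and destination must differ")
    return {"origin": origin, "destination": dest,
            "depart": iso(out_date), "return": iso(back_date)}
```

The normalized dictionary is what an AppCall would pass through Alme’s Web Services integration so the booking engine can display appropriate choices.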

Virtual Assistants Are Doing More of the Work of Live Agents

Next IT Alme was one of the first virtual assistant software products with the capabilities to perform normal activities. Its facilities are powerful and flexible. While integration with external applications will always require programming (and Next IT has simplified that programming), Alme’s facilities for supporting normal activities are built-in and designed for business analysts. They’re reasonably easy to learn, easy to use, and easy to manage.

By performing normal activities, virtual assistants are doing more of the work that live agents have been doing—quickly, accurately, consistently, and at a lower cost than live agents. That frees live agents to handle the stickiest, most complex customer requests, both requests to perform normal activities and requests to answer questions and resolve problems. It’s also a driver for your organization to consider adding virtual assistants to its customer service and customer experience portfolio.