New Evaluation Framework

We used a new Evaluation Framework for our latest Product Evaluation Report, which covers Salesforce Service Cloud. We introduced the new Framework to make our reports shorter and more easily actionable. Shorter for sure: our previous report on Service Cloud ran 57 pages including illustrations; this one is 22 pages including illustrations, shorter by more than 60 percent!

We don’t yet know whether the Report is more easily actionable; it was just published. But our approach to its writing was to minimize descriptions and to bring to the front our most salient analyses, conclusions, and recommendations.

Why?

Our Product Evaluation Reports had become increasingly valuable, but to fewer readers. Business analysts facing a product selection decision, analysts for bankers and venture capitalists considering an investment decision, and suppliers’ competitive intelligence staff keeping up with the industry have always appreciated the reports, especially their depth and detail.

However, suppliers, whose products were the subjects of the reports, complained about their length and depth. Requests for more time to review the reports became the norm, extending our publishing cycle. Then, when we finally got their responses, we’d see heavy commenting at the beginning of the reports but light or no commenting at the end, as if the reviewers had lost interest. Our editors have made the same complaints.

More significantly, readership, actually reading in general, is way down. Fewer people read…anything. These days, people want information in very small bites. Getting personal, for example, I loved Ron Chernow’s 800-page Hamilton, but I have spoken to so many who told me that it was too long. They couldn’t get through it and put it down unfinished, or, more typically, they wouldn’t even start it. I’m by no means comparing my Product Evaluation Reports to this masterpiece about American history. I’m just trying to emphasize the point.

Shorter Reports, No Less Research

While the Product Evaluation Report on Salesforce Service Cloud was 60 percent shorter, our research to write it was the same as our research for those previous, much longer Product Evaluation Reports. Our approach to research still has these elements, listed in order of increasing importance:

  • Supplier presentations and demonstrations
  • Supplier web content: web site, user and developer communities
  • Supplier SEC filings, especially Forms 10-Q and 10-K
  • Patent documentation, if appropriate
  • Product documentation, the manuals for administrators, users, and developers
  • Product trial

Product documentation and product trial are the most important research elements, and we spend most of our research time in these two areas. Product documentation, the “manuals” for administrators, users, and developers, provides complete, actual, accurate, and spin-less descriptions of how to set up and configure a product, of what a product does—its services and data—and of how it works. Product trials give us the opportunity to put our hands on a product and try it out for customer service tasks.

What’s In?

The new Framework has these four top-level evaluation criteria:

  • Customer Service Apps lists and identifies the key capabilities of the apps included in a customer service software product or added to it via features and/or add-ons.
  • Channels, Devices, Languages list supported assisted-service and self-service channels, devices attachable to those channels, and languages that agents and customers may use to access the customer service apps on those devices.
  • Reporting examines the facilities to measure and present information about a product’s usage, performance, effectiveness, and efficiency. Analysts use this information continually to refine their customer service product deployments.
  • Product, Supplier, Offer. Product examines the history, release cycle, development plans, and customer base for a customer service product. They’re the factors that determine product viability. Supplier examines the factors that determine the supplier’s viability. Offer examines the supplier’s markets for the product and the product’s packaging and pricing.

This is the information that we use to evaluate a customer service product.

What’s Missing?

Technology descriptions and their finely granular analyses are out. For example, the new reports do not include tables listing and describing the attributes/fields of the data models for key customer service objects/records like cases and knowledge items, or listing and describing the services that products provide for operating on those data models to perform customer service tasks. The new reports do not present analyses of individual data model attributes or individual services, either. Rather, the reports present a coarsely granular analysis of data models and services with a focus on strengths, limitations, and differentiators. We explain why data models might be rich and flexible, or we identify important, missing types, attributes, and relationships, then summarize the details that support our analysis.

“Customer Service Technologies” comprised more than half the evaluation criteria of the previous Framework and two thirds of the content of our previous Framework-based reports. These criteria described and analyzed case management, knowledge management, findability, integration, and reporting and analysis. For example, within case management, we examined case model, case management service, case sources, and case management tools. They’re out in the new version, and they’re the reason the reports are shorter. But they’re the basis of our analysis of the Customer Service Apps criterion. If a product has a rich case model and a large set of case management services, then those capabilities will be listed among the case management app’s key capabilities in our Customer Service Apps Table, and we’ll explain why we listed them in the analysis following the Table. On the other hand, if a product’s case model is limited, then case model will be absent from the Table’s list of key capabilities and we’ll call out the limitations in our analysis. Just a reminder: our bases for the evaluation of the Customer Service Apps criterion, the subcriteria of Technologies from the old Framework, are shown in the Table below:

Table 1. We present the bases for the evaluation of the Customer Service Apps criterion in this Table.

Trustworthy Analysis

We had always felt that we had to demonstrate that we understood a technology to justify our analysis of that technology. We had also felt that you wanted and needed our analysis of all of that technology at the detailed level of every individual data attribute and service. You have taught us that you’d prefer higher-level analyses, with low-level detail only where it’s needed to understand the most salient strengths, limitations, and differentiators.

The lesson that we’ve learned from you can be found in a new generation of Product Evaluation Reports. Take a look at our latest Report, our evaluation of Salesforce Service Cloud, and let us know if we’ve truly learned that lesson.

Remember, though, if you need more detail, then ask us for it. We’ve done the research.


Virtual Assistant Update


We recently published “Virtual Assistant Update.” It’s a broad and not too deep update on virtual assistant technologies, products, suppliers, and markets from the perspective of the five leading suppliers: [24]7, Creative Virtual, IBM, Next IT, and Nuance. These are the leaders because they:

  • Have been in the virtual assistant business for some time (from 16 years for [24]7 via its acquisition of IntelliResponse to four years for IBM).
  • Have attractive and useful virtual assistant technology.
  • Offer virtual assistant products that are widely used and well proven.
  • Want to be in the virtual assistant business and have company plans and product plans to continue.

The five suppliers are quite diverse. There’s the public $80 billion IBM and the public $2 billion Nuance. Then there are the private [24]7, a venture-backed company big on acquisitions, and the more closely held Creative Virtual and Next IT. Despite these big corporate-level differences, the five’s virtual assistant businesses are quite similar. Roughly, they’re all about the same size, and the five compete as equals to acquire and retain virtual assistant business.

By the way, across the past 12 to 24 months, business has been good for all of the five suppliers. Customer growth has been very good across the board. Our suppliers have expanded into new markets and have introduced new and/or improved products.

Natural Language Processing and Machine Learning

Technologies are quite similar, too. All five have built their virtual assistant offerings with the same core technologies: Natural Language Processing (NLP) and machine learning.

Virtual assistants use NLP to recognize the intents of customer requests. NLP implementations usually comprise an engine that processes customer requests using an assortment of algorithms to parse and understand the words and phrases in a customer’s request. An NLP engine’s processing is guided by customizable and/or configurable deployment-specific mechanisms such as language models, grammars, and rules. These mechanisms accommodate the vocabularies of a deployment’s business, products, and customers.
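To make that concrete, here’s a minimal, hypothetical sketch (in Python) of rule-guided intent recognition. The intents and patterns below are invented stand-ins for a deployment’s vocabulary and rules, not any supplier’s actual mechanism:

    import re

    # Hypothetical deployment-specific rules: regular expressions that map
    # a business's vocabulary onto intents. Real NLP engines use far richer
    # language models, grammars, and rules than this.
    INTENT_RULES = {
        "make_payment":   [r"\bpay(ment)?\b", r"\bbill\b"],
        "reset_password": [r"\breset\b.*\bpassword\b", r"\blocked out\b"],
        "order_status":   [r"\b(track|status)\b.*\border\b", r"\bwhere\b.*\border\b"],
    }

    def recognize_intent(request: str) -> str | None:
        """Return the first intent whose rules match the customer's request."""
        text = request.lower()
        for intent, patterns in INTENT_RULES.items():
            if any(re.search(pattern, text) for pattern in patterns):
                return intent
        return None  # unrecognized; a production engine would fall back further

    print(recognize_intent("I'd like to pay my bill"))  # make_payment
    print(recognize_intent("Where is my order?"))       # order_status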

Virtual assistants use machine learning technology to match actual customer requests with anticipated customer requests and then to select the content or execute the logic associated with the anticipated requests. (Machine learning algorithms learn from and then make predictions on data. Algorithms learn from training. Analysts/scientists train them with sample, example, or typical deployment-specific input, then with feedback or supervision on correct and incorrect predictions. A trained algorithm is a deployment-specific machine learning model. The accuracy of models can improve with additional and continuing training. Some machine learning implementations are self-learning.)
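And here’s an equally minimal sketch of the training-and-prediction loop just described, using scikit-learn. The utterances and intent labels are invented deployment-specific examples, not any supplier’s data or pipeline:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Anticipated customer requests (training samples) and their intents.
    utterances = [
        "I want to pay my bill", "how do I make a payment",
        "I forgot my password", "please reset my password",
        "where is my order", "track my shipment",
    ]
    intents = [
        "make_payment", "make_payment",
        "reset_password", "reset_password",
        "order_status", "order_status",
    ]

    # Training produces the deployment-specific model: a pipeline that
    # turns text into features and learns to predict the intent label.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(utterances, intents)

    # Prediction matches an actual request against the anticipated ones.
    print(model.predict(["can I pay my bill online"])[0])  # likely: make_payment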

Complex and Sophisticated Work: Consultant-led or Consultant-assisted

The work to adapt NLP and machine learning technology implementations for virtual assistant deployments is sophisticated and complex. This is work for experts: scientists, analysts, and developers in languages, data, and algorithms. The approach to this work differentiates virtual assistant suppliers and products. The approach drives virtual assistant product selection. Here’s what we mean.

All the virtual assistant suppliers have built tools and packaged predefined resources to make the work simpler, faster, and more consistent. Some suppliers have built tools for the experts, and these suppliers have also built consulting organizations with the expertise to use their tools. Successful deployments of their virtual assistant offerings are consultant-led: they require the services of the suppliers’ (or the suppliers’ partners’) consulting organizations.

Some suppliers have built tools that further abstract the work and make it possible for analysts, business users, and IT developers to deploy. While these suppliers have also built consulting organizations with expertise in virtual assistant technologies and in their tools, successful deployments of their virtual assistant offerings are consultant-assisted and may even approach self-service.

So, a key factor in the selection of a virtual assistant product is deployment approach: consultant-led or consultant-assisted. Creative Virtual, Next IT, and Nuance offer consultant-led virtual assistant deployments. [24]7 and IBM offer consultant-assisted deployments. For example, IBM Watson Virtual Agent includes tools that make it easy to deploy virtual assistants. In the Figure below, we show the workspace wherein analysts specify the virtual assistant’s response to a customer request to make a payment. Note that the possible responses leverage content, tools, and facilities packaged with the product.


© 2017 IBM Corporation

Illustration 7. This Illustration shows the Watson Virtual Agent workspace for specifying responses from the bot/virtual assistant.


Which is the better approach? Consultant-assisted is our preference, but we’ve learned over our long years of research and consulting that deployment approach is a function of corporate style, personality, and culture. Some businesses and organizations give consultants the responsibility for initial and ongoing technology deployments. Some businesses want to do it themselves. For virtual assistant software, corporate style could very well be a key factor in product selection.


Evaluating Customer Service Products

Framework-based, In-depth Product Evaluation Reports

We recently published our Product Evaluation Report on Desk.com, Salesforce’s customer service offering for small and mid-sized businesses. “Desk” is a very attractive offering with broad and deep capabilities. It earns good grades on our Customer Service Report Card, including Exceeds Requirements grades in Knowledge Management, Customer Service Integration, and Company Viability.

We’re confident that this report provides input and guidance to analysts in their efforts to evaluate, compare, and select customer service products, and we know that it provides product assessment and product planning input for its product managers. Technology analysts and product managers are the primary audiences for our reports. We research and write to help exactly these roles. Like all of our Product Evaluation Reports about customer service products that include multiple apps—case management, knowledge management, web self-service, communities, and social customer service—it’s a big report, more than 60 pages.

Big is good. It’s their depth and detail that make them so. Our research for them always includes studying a product’s licensed admin, user, and, when accessible, developer documentation, the manuals or online help files that come with a product. We read the patents or patent applications that are a product’s technology foundation. Whenever offered, we deploy and use the products. (We took the free 30-day trial of Desk.) We’ll watch suppliers’ demonstrations, but we rely on the actual product and its underlying technologies.

On the other hand, we’ve recently been hearing from some, especially product marketers when they’re charged to review report drafts (we never publish without the supplier’s review), that the reports are too big. Okay. Point taken. Perhaps it is time to update our Product Evaluation Framework, the report outline, to produce shorter, more actionable reports: reports with no less depth and detail but with less descriptive content and more salient analytic content. It’s also time to tighten up our content.

Product Evaluation Reports Have Two Main Parts

Our Product Evaluation Reports have had two main parts: Customer Service Best Fit and Customer Service Technologies. Customer Service Best Fit “presents information and analysis that classifies and describes customer service software products…speed(ing) evaluation and selection by presenting easy to evaluate characteristics that can quickly qualify an offering.” Customer Service Technologies examines the implementations of a product’s customer service applications and their foundation technologies as well as its integration and reporting and analysis capabilities. Here’s where the reports’ depth and detail (and most of their content) live. Going forward, we’ll continue with this organization.

Streamlining Customer Service Best Fit

We will revamp and streamline Customer Service Best Fit, improving naming and emphasizing checklists. The section will now have this organization:

  • Applications, Channels, Devices, Languages
  • Packaging and Licensing
  • Supplier and Product
  • Best Prospects and Sample Customers
  • Competitors

Applications, Channels, Devices, Languages are lists of key product characteristics, characteristics that quickly qualify a product for deeper consideration. More specifically, applications are the sets of customer service capabilities “in the box” with the product—case management, knowledge management, and social customer service, for example. Channels are assisted-service, self-service, and social. We list apps within supported channels to show how what’s in the box may be deployed. Devices are the browsers and mobile devices the product supports for internal users and for end customers. Languages are two lists: one for the languages the product supports for administration and internal users, and one for the languages it supports for end customers.

Packaging and Licensing presents how the supplier offers the product, the fees that it charges for the offerings, and the consulting services available and/or necessary to help licensees deploy the offerings.

Supplier and Product present high-level assessments of the supplier’s and the product’s viability. For the supplier, we present history, ownership, staffing, financial performance, and customer growth. For the product, we present history, current development approach, release cycle, and future plans.

Best Prospects and Sample Customers are lists of the target markets for the product—the industries, business sizes, and geographies wherein the product best fits. This section also covers the current customer base for the product, lists typical/sample customers within those target markets, and, if possible, presents screen shots of their deployments.

Competitors lists the product’s closest competitors, its best alternatives. We’ll also include a bit of analysis explaining what makes them the best alternatives and where the subject product has differentiators.

Tightening-up Customer Service Technologies

Customer Service Technologies is the key value-add and most significant differentiator of our Product Evaluation Reports. It’s why you should read our reports, but, as we mentioned, it’s also the main reason why they’re big.

We’ve spent years developing and refining the criteria of our Evaluation Framework. The criteria are the results of continuing work with customer service products and technologies and of our complementary work with the people who are products’ prospects, licensees, suppliers, and competitors. We’re confident that we evaluate the technologies of customer service products by the most important, relevant, and actionable criteria. Our approach creates common, supplier-independent and product-independent analyses. These analyses enable the evaluation and comparison of similar customer service products and result in faster and lower risk selection of a product that best fits a set of requirements.

However, we have noticed that the descriptive content that forms the bases for our analyses has gotten a bit lengthy and repetitive (repeating information in Customer Service Best Fit). We plan to tighten up Customer Service Technologies content and analysis in these ways:

  • Tables
  • Focused Evaluation Criteria
  • Consistent Analysis
  • Reporting

Too much narrative and analysis has crept into Tables. We’ll make sure that Tables are bulleted lists with little narrative and no analysis.

Evaluation criteria have become too broad. We’ve been including detailed descriptions and analyses of related and supporting resources along with the resource that’s the focus of the evaluation. For example, when we describe and analyze the details of a case model, we’ll no longer also describe and analyze the details of user and customer models. Rather, we’ll just describe the relationships between the resources.

Our analyses will have three sections. The first will summarize what’s best about a product. The second will present additional description and analysis where Table content needs further examination. The third will be “Room for Improvement,” areas where the product is limited. This approach will make the reports more actionable and more readable as well as shorter.

In reporting, we’ll stop examining instrumentation, the collection and logging of the data that serves as report input. The presence (or absence) of reports about the usage and performance of customer service resources is really what matters. So, we’ll call the criterion “Reporting” and we’ll list the predefined reports packaged with a product in a Table. We’ll discuss missing reports and issues in instrumentation in our analysis.

Going Forward

Our Product Evaluation Report about Microsoft Dynamics CRM Online Service will be the first to be written on the streamlined Framework. Expect it in the next several weeks. Its Customer Service Best Fit section really is smaller. Each of its Customer Service Technologies sections is smaller, too, more readable and more actionable as well.

Here’s the graphic of our Product Evaluation Framework, reflecting the changes that we’ve described in this post.


Please let us know if these changes make sense to you and please let us know if the new versions of the Product Evaluation Reports that leverage them really are more readable and more actionable.

Who You Gonna Call?

Apologies to Ray Parker Jr. While your question or problem may not be about ridding your neighborhood of ghosts, “Who You Gonna Call” to get the answer or solution that you need?

Getting help on the Internet or on your mobile device is easy: type it into the search box of your favorite Internet search engine, or ask Siri (now Alexa and Cortana, too). But it’s not always easy to get an answer or a solution to complex, detailed, or involved questions and problems. Who You Gonna Call with those?

Questions and Problems

Over the past several months, your blogger has had quite a few questions and problems for which answers and solutions were not so easy to find. Here are some of them:

  • My Whirlpool electric dryer doesn’t heat (or maybe it overheats before it doesn’t heat).
  • My Toro gasoline powered lawnmower is hard to start and stalls when it does start.
  • My new iPhone 6s doesn’t pair with the Bluetooth audio system in my car.
  • Which should I buy: an electric induction cooktop, a standard electric cooktop, or a natural gas cooktop?

DIY Answers and Solutions

Getting answers and solutions to these questions and problems involves getting your hands dirty, literally or figuratively. These questions and problems are about what things do, how things are put together/assembled, and the way that things work. I want the inside information that I can use to explain the answers and apply the fixes myself. I’m a DIY (Do It Yourself) kind of person, a DIYer. I’m willing and eager, and I have tools. I enjoy the challenge and I revel in the satisfaction of getting the answers or fixing the problems myself. I’m not looking for a pro to do the work for me for a fee.

So who was your blogger gonna call to get answers and fixes to the list of questions and problems? Let’s take a look at these possibilities:

  • Social networks
  • Communities and forums
  • YouTube
  • Brand sites
  • Build and repair sites

Social Networks

Crowd-sourcing answers and fixes from the members of my social networks might not work for these kinds of questions and problems. While many of my friends and followers are DIY kind of people, too, the most I expect from a crowd-sourced approach is a reference to a web site or to an expert. Very helpful to be sure, but a step removed from what I need.

Communities and Forums

Communities and forums let members post questions and problems within topics in the hopes that other community members will reply with comments that contain answers and solutions. There are two types of communities and forums. Communities of the first type are hosted and moderated by the brand about which customers ask questions or pose problems and receive answers and solutions from other customers as well as from subject matter experts (SMEs) who may also be customers or may be on the brand’s customer service staff. These communities can be very helpful, especially so when the brand’s employees monitor and moderate customers’ questions and problems. Brand employee participation ensures correct answers and solutions. They’re not so helpful when their answers and solutions lack detail or when their topics do not include the subjects of questions and problems. We’ve seen communities for ISVs that seem only to suggest consulting services as answers and solutions. We’ve seen communities with topics only about making suggestions for product or service improvements or only about customer experience with a brand.

The second type of community or forum is hosted and managed independently of the brand that is the subject of its topics. Posts on these communities commonly contain complex, detailed, technical questions and problems. Comments frequently contain exactly the answers and solutions in the level of detail that DIYers crave. On the other hand, many of these communities have no moderation or monitoring by SMEs. They exercise no control over comments. For example, below is a post from acuraworld.com that accurately represents my question about Bluetooth pairing a new iPhone. The comment contains an unmoderated and unappealing answer.


© 2016 Acuraworld

Perhaps this answer does solve the problem, but I would never “Reset All Settings” on my iPhone to solve it. A better answer lists the steps to establish a new pairing in the car, a pain for sure because voice tags are phone-specific in my car’s system. Be careful with communities and forums.

YouTube

YouTube has a huge library of DIY videos. Find the video that answers your question or solves your problem by searching within the site. YouTube’s videos are posted by brands, by repair pros, and by DIYers. YouTube does not monitor or moderate this content. So, DIYer beware. Be careful of whose advice you take.

A YouTube DIYer video, https://www.youtube.com/watch?v=0Ni-rdRyxA0, contained the fix to my starting/stalling lawnmower problem. I found it after searching independent communities for the problem symptom and learning that my problem was somewhere in the lawnmower’s fuel system, likely the carburetor. Note that Toro.com, the brand site for my lawnmower, was similar to Whirlpool.com, offering downloads of product manuals.

Brand Sites

Brand web sites may contain the level of information that answers detailed questions or that fixes problems with their products. For my dryer problem, I went to whirlpool.com, clicked the Owners tab, and clicked the Support tab to get to this site:


© 2016 Whirlpool

I followed the Manuals tab/Find Manuals link, then entered the model number. For my model, Whirlpool provides three downloads:

  • Owners Manual
  • Installation Instructions
  • Parts List

The Owners Manual is a “Use and Care Guide.” Its content is not model-specific or even specific to dryer type—electric or gas. It does contain an If You Need Assistance or Service section that provides some high-level troubleshooting information as well as telephone numbers and mailing addresses (it’s an old dryer). The Parts List contains numbered schematics and corresponding lists of part numbers and brief descriptions or names of every single part of the dryer. This information is essential because a fix usually requires replacement of broken parts, and part numbers are the mechanism for their identification. The Parts List manual also provides an idea of how the dryer is assembled and of how it works. The heating element, thermostats, and fuses are the likely causes of not heating and of overheating. These parts are numbers 6, 7, 8, 15, and 17 on the schematic for Bulkhead Parts shown below.


© 2016 Whirlpool

Looking at the schematic, it’s difficult to visualize an assembled dryer and the locations of and accesses to the heating element, thermostats, and fuses. Mechanical/electrical aptitude and actual repair experience are required for that. You’ll have them after a single repair, but don’t call a pro yet. More online help is available.

Repair and Parts Sites

Repair and parts sites are exactly that kind of online help. My fav is repairclinic.com. Go there, enter your model number, and you’ll see an extremely helpful page like this:


© 2016 RepairClinic.com, Inc

In addition to a list of parts with pictures and descriptions, Repair Clinic also provides a list of Common Problems on the left of the page. Click “Dryer overheating” to reach this page:


© 2016 RepairClinic.com, Inc

Now the fix is very close. This page contains everything you’ll need to understand how dryers work and how/why they break, to diagnose and verify the problem, to identify the part causing it, and to order the part to fix it. The ordered list of likely causes with descriptions and videos is especially helpful. I love this site. It contains similar information for lawn equipment, heating and cooling, and power tools as well as appliances. But, repairclinic.com is not the only site that provides diagnostics and parts for fixing these types of problems.

SearsPartsDirect.com contains information similar to RepairClinic.com’s, and not just for Sears’ products. ThisOldHouse.com, the web site for the long-running PBS series, contains a wealth of answers and solutions to a wide range of home improvement and repair questions, problems, and projects. Answers and solutions are easy-to-understand videos presented by the show’s experts. The video library is continually growing.

Cooktops

The last item in my list is a product research question about cooktops. I was ready to replace my 30-something-year-old electric cooktop with a gas cooktop. My product research started, as it usually does, on ConsumerReports.org. It’s a subscription site; I’ve been a subscriber and a member for many years. First, I looked at the Buying Guide for cooktops, where I learned about electric induction cooktops. The description and analysis changed my mind about gas. Then I went to product ratings of electric induction cooktop products. Consumer Reports rated GE Profile products highly, and my wife and I have been very happy with the other GE Profile appliances in our kitchen. That’s what we bought. Of course, I installed it.

Recommendations

The Internet is a wonderful resource for getting DIY answers and solutions. The challenge for DIYers will be identifying the correct and most usable answers and solutions from a myriad of reasonable possibilities. Who You Gonna Call? Generally, we recommend:

  • Brand sites
  • Moderated and monitored communities
  • Build and repair sites
  • YouTube

More specifically, RepairClinic.com and, especially, ConsumerReports.org are our favorites. Your subscription and membership fees to Consumer Reports will be paid back many times over with the best product research.


The Helpdesks: Desk.com, Freshdesk, Zendesk

We’ve added our Product Evaluation Report on Freshdesk to our library of in-depth, framework-based reports on customer service software. We put this report on the shelf, so to speak, next to our Product Evaluation Reports on Desk.com and Zendesk. The three products are quite a set. They’re similar in many ways, remarkably so. Here are a few of those similarities:

The products are “helpdesks,” apps designed to provide an organization’s customers (or users) with information and support about the organization’s products and services. Hence, their names are (alphabetically) Desk.com, Freshdesk, and Zendesk.

They have the same sets of customer service apps and those apps have very similar capabilities: case management, knowledge management and community/forum with a self-service web portal and search, social customer service supporting Facebook and Twitter, chat, and telephone/contact center. Case management is the core app and a key strength for all of the products. Each has business rules-based facilities to automate case management tasks. On the other hand, knowledge management and search are pretty basic in all of them.

The three also include reporting capabilities and facilities for integrating external apps. Reporting has limitations in all three. Integration is excellent across the board.

These are products that deploy in the cloud. They support the same browsers and all three also have native apps for Android and iOS devices.

All three are packaged and priced in tiers/levels/editions of functionality. Their licensing is by subscription with monthly, per user license fees.

Simple, easy to learn and easy to use, and cross/multi/omni-channel are the ways that the suppliers position these offerings. Our evaluations were based on trial deployments for each of the three products. We found that all of them support these positioning elements very well.

Small (very small, too) and mid-sized businesses across industries in all geographies are their best fits, although the suppliers would like to move up market. The three products have very large customer bases—somewhere around 30,000 accounts for Desk.com and Zendesk and more than 50,000 accounts for Freshdesk per a claim in August from Freshdesk’s CEO. Note that Desk.com was introduced in 2010, Freshdesk in 2011, and Zendesk in 2004.

Suppliers’ internal development organizations design, build, and maintain the products. All three suppliers have used acquisitions to extend and improve product capabilities.

While the products are similar, the three suppliers are quite different. Salesforce.com offers Desk.com. Salesforce is a publicly held, San Francisco, CA-based, $8 billion corporation founded in 1999. Salesforce has multiple product lines. Freshdesk Inc. offers Freshdesk. It’s a privately held corporation founded in 2010 and based in Chennai, India. Zendesk, Inc. offers Zendesk. This company was founded in 2007 in Denmark and reincorporated in the US in 2009. It’s publicly held and based in San Francisco, CA. Revenues in 2015 were more than $200 million.

These differences—public vs. private, young vs. old(er), large vs. small(er), single product line vs. multiple product line—will certainly influence many selection decisions. However, all three are viable suppliers and all three are leaders in customer service software. The supplier risk in selecting Desk.com, Freshdesk, or Zendesk is small.

Then, where are the differences that drive a selection decision? The differences are in the ways that the products’ developers have implemented the customer service applications. The differences become clear from actually using the products. Having actually used all three products in our research, we’ve learned the differences and we’ve documented them in our Product Evaluation Reports. Read them to understand the differences and to understand how those differences match your requirements. There’s no best among Desk.com, Freshdesk, and Zendesk, but one of them will be best for you.

For example, here’s the summary of our Freshdesk evaluation, the grades that the product earned on our Customer Service Report Card: “Freshdesk earns a mixed Report Card—Exceeds Requirements grades in Capabilities, Product Management, Case Management, and Customer Service Integration, Meets Requirements grades in Product Marketing, Supplier Viability, and Social Customer Service, but Needs Improvement grades in Knowledge Management, Findability, and Reporting and Analysis.”

Case Management is where Freshdesk has its most significant differences: differences from its large set of case management services and facilities, its support for case management teams, its automation of case management tasks, and its easy to learn, easy to use case management tools. For example, Arcade is one of Freshdesk’s facilities for supporting case management teams. Arcade is a collection of three optional gamification facilities that set and track goals for agents’ customer service activities (a minimal sketch of such a points scheme follows the list):

  • Agents earn Points for resolving Tickets in a fast and timely manner and lose Points for being late and for having dissatisfied customers, accumulating Points toward reaching six predefined skill levels.
  • Agents earn “trophies” for monthly Ticket management performance.
  • Arcade awards bonus points for achieving customer service “Quests” such as forum participation or publishing knowledgebase Solutions.
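
Here’s that sketch: a minimal Python model of an Arcade-style Points-and-skill-levels scheme. The events, point values, and level names are invented for illustration; Freshdesk’s actual, configurable values differ:

    from dataclasses import dataclass

    # Invented point values and skill levels, standing in for the settings
    # that administrators configure in an Arcade-style scheme.
    POINTS = {"fast_resolution": 10, "on_time": 5, "late": -5, "dissatisfied": -10}
    LEVELS = [(0, "Rookie"), (100, "Pro"), (500, "Master")]

    @dataclass
    class Agent:
        name: str
        points: int = 0

        def record(self, event: str) -> None:
            """Apply the configured point value for a Ticket-handling event."""
            self.points += POINTS[event]

        @property
        def level(self) -> str:
            """The highest skill level whose threshold the agent has reached."""
            current = LEVELS[0][1]
            for threshold, label in LEVELS:
                if self.points >= threshold:
                    current = label
            return current

    agent = Agent("Dana")
    agent.record("fast_resolution")
    agent.record("on_time")
    print(agent.points, agent.level)  # 15 Rookie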

Administrators can configure Arcade’s Points and skill levels. Its Trophies and Quests have predefined goals; however, administrators can turn Quests on or off. The Illustration below shows the workspace that administrators use to configure Points.


Freshdesk can be a Customer Service Best Fit for many small and mid-sized organizations. Is it a Best Fit for yours? Read our Report to understand why and how.

Nuance Nina Virtual Assistants

We evaluated Nina, the virtual assistant offering from Nuance, for the third time, publishing our Product Evaluation Report on October 29, 2015. This Report covers both Nina Mobile and Nina Web.

Briefly, by way of background, Nina Mobile provides virtual assisted-service on mobile devices. Customers ask questions or request actions of Nina Mobile’s virtual assistants by speaking or typing them. Nina Mobile’s virtual assistants deliver answers in text. Nina Mobile was introduced in 2012. We estimate that approximately 15 Nina Mobile-based virtual assistants have been deployed in customer accounts.

Nina Web provides virtual assisted-service through web browsers on PCs and on mobile devices. Customers ask questions or request actions of Nina Web’s virtual assistants by typing them into text boxes. Nina Web’s virtual assistants deliver answers or perform actions in text and/or in speech. Nina Web was introduced as VirtuOz Intelligent Virtual Agent in 2004. Nuance acquired VirtuOz in 2013. We estimate that approximately 35 Nina Web-based virtual assistants have been deployed in customer accounts.

The two products now have common technologies, tools, and a development and deployment platform. That’s a big deal: they had been separate and pretty much independent products, sharing little more than a brand. Nuance’s development team has been busy and productive. Nina also has many new and improved capabilities. Most significant are a new, additional toolset that supports key tasks in initial deployment and ongoing management; PCI (Payment Card Industry) certification, which means that Nina virtual assistants can perform ecommerce tasks for customers; support for additional languages; and packaged integrations with chat applications.

Nina Evaluation Process

We did not include an evaluation of Nina’s Ease of Evaluation. Our work on the Nina Product Evaluation Report was well underway before we added that criterion to our framework. So, we’ll offer that evaluation here.

For our evaluation, we used:

  • Product documentation, which was provided to us by Nuance under an NDA
  • Demonstrations, especially of new tools and functionality, conducted by Nuance product management staff
  • Web content of nuance.com
  • Online content of Nina deployments
  • Nuance’s SEC filings
  • Discussions with Nuance product management and product marketing staff
  • A thorough (and very much appreciated) review of the report draft

We also leveraged our knowledge of Nina, knowledge that we acquired in our research for two previously published Product Evaluation Reports from July 2012 and January 2014. We know the product, the underlying technology, and the supplier. So we were able to focus our research on what was new and improved.

Product Documentation

Product documentation, the end user/admin manuals for the Nina IQ Studio (NIQS) and the new Nuance Experience Studio (NES) toolsets, was the key source for our research. We found the manuals to be well written and reasonably easy to understand. Samples and examples illustrated simple use cases and supported descriptions very well. Showing more complex use cases, especially for customer/virtual assistant dialogs, would have been very helpful. Personalization facilities could be explained more thoroughly. Also, there’s a bit of inconsistency in terminology between the two toolsets and their documentation.

Nina Deployments

Online content of Nina deployments helped our research significantly. Within the report, we showed two examples of businesses that have licensed and deployed Nina Web: up2drive.com, the online auto loan site for BMW Financial Services NA, LLC, and the Swedish-language site for Swedbank, Sweden’s largest savings bank. The up2drive Assist box accesses the site’s Nina Web virtual assistant. We asked, “How do I qualify for the lowest new car rate?” See the Illustration just below.


Online content of Nina Mobile deployments shows how virtual assistants can perform actions for customers. For example, we showed how Dom, the Nina Mobile virtual assistant, could help you order pizza from Domino’s in our blog post of May 14, 2015. See https://www.youtube.com/watch?v=noVzvBG0GD0.

Take care when using virtual assistant deployments for evaluation and selection. They’re only as good as the deploying organization wants to make them. Their limitations are almost never the limitations of the virtual assistant software. Every virtual assistant software product that we’ve evaluated has the facilities to implement and deliver excellent customer service experience. Virtual assistant deployments, like all customer experience deployments, are limited by the deploying organization’s investment in them. The level of investment controls which questions they can answer, which actions they can perform, how well they can deal with vague or ambiguous questions and action requests, and their support for dialogs/conversations, personalization, and transactions.

No Trial/Test Drive

Note that Nuance did not provide us with a product trial/test drive of Nina. In fact, Nuance does not offer Nina trials/test drives to anyone. That’s typical of and common for virtual assistant software. Suppliers want easy and fast self-service trials that lead prospects to license their offerings. Virtual assistant software trials are not any of these things. They’re not designed for self-service deployment either for free or for fee.

Why not? Because virtual assistant software is complex. Even its simplest deployment requires building a knowledgebase of the answers to the typical and expected questions that customers ask; using virtual assistant facilities to deal with vague and ambiguous questions (by engaging in a dialog/conversation, escalating to chat, or presenting a “no results found” message, for example); and using virtual assistant facilities to perform actions that customers request and deciding how to perform them. (Performing actions will likely require integration with apps external to virtual assistant apps.) This is not the stuff of self-service trials and test-drives.
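As a minimal illustration of that decision logic, here’s a Python sketch. The knowledgebase entries, thresholds, and classifier interface are our own invented assumptions, not any product’s design:

    # Invented knowledgebase mapping intents to answers, plus confidence
    # thresholds that decide among answering, asking the customer to
    # rephrase (a dialog turn), and escalating to chat.
    KNOWLEDGEBASE = {
        "make_payment": "You can pay your bill under Account > Payments.",
        "order_status": "You can track your order from the Orders page.",
    }
    ANSWER_THRESHOLD = 0.6
    DIALOG_THRESHOLD = 0.3

    def respond(request: str, classify) -> str:
        """classify(request) -> (intent, confidence), supplied by the NLP/ML layer."""
        intent, confidence = classify(request)
        if confidence >= ANSWER_THRESHOLD and intent in KNOWLEDGEBASE:
            return KNOWLEDGEBASE[intent]
        if confidence >= DIALOG_THRESHOLD:
            return "I'm not sure I understood. Could you rephrase that?"
        return "No results found. Would you like to chat with an agent?"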

In addition, most virtual assistant suppliers have not yet invested in building tools that speed and simplify the work that organizations must perform for the initial deployment and ongoing management of virtual assistant software even after it has been licensed. Instead, suppliers offer their consulting services. (That’s changing for Nuance with toolsets like NES, and for several other virtual assistant software suppliers, and that’s certainly a topic for a later time.)

Thank You Very Much, Nuance

One more point about Ease of Evaluation. Our research goes into the details of customer service software. We publish in-depth Product Evaluation Reports. We demand a significant commitment from suppliers to support our work. Nuance certainly made that commitment and made Nina Easy to Evaluate for us. We so appreciate Nuance’s support and the time and effort taken by its staff.

Nina was very easy for us to evaluate. The product earns a grade of Exceeds Requirements in Ease of Evaluation.

Zendesk, Customer Service Software That’s Easy to Evaluate

Zendesk Product Evaluation

Zendesk is the customer service offering from Zendesk, Inc. a publicly held, San Francisco, CA based software supplier with 1,000 employees that was founded in 2004. The product provides cloud-based, cross-channel case management, knowledge management, communities and collaboration, and social customer service capabilities across assisted-service, self-service, and social customer service channels.

We evaluated Zendesk against our Evaluation Framework for Customer Service and published our Product Evaluation Report on October 22. Zendesk earned a very good Report Card—Exceeds Requirements grades in Product History and Strategy, Case Management, and Customer Service Integration, and Meets Requirements grades for all other criteria but one, Social Customer Service. Its Needs Improvement grade in Social Customer Service reflects less an issue with packaged capabilities than the need for a specialized external app designed and positioned for wide and deep monitoring of social networks.

Evaluation Framework

Our Evaluation Framework considers an offering’s functionality and implementation, what a product does and how it does it. It also considers the supplier and the supplier’s product marketing (positioning, target markets, packaging and pricing, competition) and product management (release history and cycle, development approach, strategy and plans) for the offering.

We rely on the supplier for product marketing and product management information. First we gather that info from the supplier’s website and press releases and, if the supplier is publicly held, from the supplier’s SEC filings. We speak directly with the supplier for anything else in these areas.

For functionality and implementation, the supplier typically gives us (frequently under NDA) access to the product’s user and developer documentation, the manuals and help files that licensees get. In this era of cloud computing, we’ve been more and more frequently getting access to the product itself through online trials. We also read any supplier’s patents and patent applications to learn about the technology foundation of functionality and implementation.

In addition, we entertain the supplier’s presentations and demonstrations. They’re useful to get a feel for the style of the product and the supplier and to understand future capabilities. However, to really understand the product, there’s no substitute for actual usage (where we drive) and/or documentation.

Our research process includes insisting that the supplier reviews and provides feedback on a draft of the Product Evaluation Report. This review process ensures that we respect any NDA, improves the accuracy and usefulness of the information in the report, and prevents embarrassing the supplier and us.

Ease of Evaluation, a New Evaluation Criterion

Our frameworks have never had an Ease of Evaluation criterion. We’ve always figured that we’d do the work to make your evaluation and selection of products easier, faster, and less costly. Our evaluation of Zendesk has us rethinking that. We’ve learned that our Product Evaluation Reports can speed and shorten your evaluation and selection process, but that your process doesn’t end with our reports. You do additional evaluation, modifying and extending our criteria or adding criteria to represent requirements specific to your organization, your business, and/or your application for a product. Understanding Ease of Evaluation can further speed and shorten your evaluation and selection process.

So, beginning with our next Product Evaluation Report, you’ll find an Ease of Evaluation criterion in our framework.

Zendesk Was Very Easy to Evaluate

By the way, Zendesk would earn an Exceeds Requirements grade for Ease of Evaluation. We did a 30-day trial of the product. We signed up for the trial online—no waiting. During the trial, we submitted cases to Zendesk Support and we used the Zendesk community forums. In addition, Zendesk.com provided a wealth of detailed information about the product, including technical specifications and a published RESTful API.

Scroll down to the bottom of Zendesk.com’s home page to see a list of UNDER THE HOOD links.


Looking at the UNDER THE HOOD links in a bit more detail:

  • Apps and integrations is a link to a marketplace for third-party apps. Currently there are more than 300 of them.
  • Developer API is a link to the documentation of Zendesk’s RESTful, JavaScript API. It lists and comprehensively describes more than 100 services.
  • Mobile SDK is a link to documentation for Android and iOS SDKs and for the Web Widget API. (The Web Widget embeds Zendesk functionality such as ticketing and knowledgebase search in a website.)
  • Security is a link to descriptions of security-related features and lists of Zendesk’s security compliance certifications and memberships.
  • Tech Specs is a link to a comprehensive collection of documents that describe Zendesk’s functionality and implementation.
  • What’s new is a link to high-level descriptions of recently added capabilities.
  • Uptime is a link to info and charts about the availability of Zendesk, Inc.’s cloud computing infrastructure.
  • Legal is a link to a description of the Terms of Service of the Zendesk offering.

We spent considerable time in Tech Specs and Developer API. We found the content to be comprehensive, well organized and easy to access, and well written. The combination of the product trial and UNDER THE HOOD made Zendesk easy to evaluate. And, we did not have to sign an NDA for access to any of this information.
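
For a taste of what that Developer API documentation covers, here’s a short Python sketch of creating a ticket through Zendesk’s documented REST API. The subdomain, email address, and API token are placeholders that a licensee would replace with its own account values:

    import requests

    SUBDOMAIN = "yourcompany"  # placeholder Zendesk subdomain
    # Zendesk's documented API-token authentication: "email/token" as the user.
    AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")

    url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/tickets.json"
    payload = {
        "ticket": {
            "subject": "Dryer won't heat",
            "comment": {"body": "Customer reports no heat after ten minutes."},
        }
    }

    response = requests.post(url, json=payload, auth=AUTH)
    response.raise_for_status()
    print(response.json()["ticket"]["id"])  # ID of the newly created ticket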

Many suppliers make their offerings as easy to evaluate as Zendesk, Inc. made Zendesk for us. On the other hand, many suppliers are not quite so willing to share detailed information about their products and, especially, their underlying technologies. Products and technologies are, after all, software suppliers’ key IP. They have every right to protect this information, and they don’t feel that patent protection is enough. Their offerings are much harder to evaluate at the level of our Product Evaluation Reports.

Consider Products That Are Easy to Evaluate

We feel, as you should, that in-depth evaluations are essential to the selection of customer service products. You’ll be spending very significant time and money to deploy and maintain these products. You should never rely on supplier presentations and demonstrations to justify those expenditures. Certainly rely on our reports and use them as the basis for your further, deeper evaluation, including our new Ease of Evaluation criterion. Put those suppliers that facilitate these evaluations on your short lists.