New Evaluation Framework

We used a new Evaluation Framework for our latest Product Evaluation Report, which is about Salesforce Service Cloud. We introduced the new Framework to make our reports shorter and more easily actionable. Shorter for sure: our previous report on Service Cloud was 57 pages including illustrations; this one is 22 pages including illustrations, shorter by more than 60 percent!

We don’t yet know whether the Report is more easily actionable; it was just published. But our approach to writing it was to minimize description and to bring our most salient analyses, conclusions, and recommendations to the front.


Our Product Evaluation Reports had become increasingly valuable but to fewer readers. Business analysts facing a product selection decision, analysts for bankers and venture capitalists considering an investment decision, and suppliers’ competitive intelligence staff keeping up with the industry have always appreciated the reports, especially their depth and detail.

However, suppliers, whose products were the subjects of the reports, complained about their length and depth. Requests for more time to review the reports became the norm, extending our publishing cycle. Then, when we finally got their responses, we’d see heavy commenting at the beginning of the reports but light or no commenting toward the end, as if the reviewers had lost interest. Our editors have made the same complaints.

More significantly, readership, actually reading in general, is way down. Fewer people read…anything. These days, people want information in very small bites. To get personal for a moment: I loved Ron Chernow’s 800-page Hamilton, but so many people have told me that it was too long. They couldn’t get through it and put it down unfinished, or, more typically, they wouldn’t even start it. I’m by no means comparing my Product Evaluation Reports to this masterpiece of American history. I’m just trying to emphasize the point.

Shorter Reports, No Less Research

While the Product Evaluation Report on Salesforce Service Cloud was 60 percent shorter, the research behind it was the same as the research behind those previous, much longer Product Evaluation Reports. Our approach to research still has these elements, listed in order of increasing importance:

  • Supplier presentations and demonstrations
  • Supplier web content: web site, user and developer communities
  • Supplier SEC filings, especially Forms 10-Q and 10-K
  • Patent documentation, if appropriate
  • Product documentation, the manuals for administrators, users, and developers
  • Product trial

Product documentation and product trial are the most important research elements, and we spend most of our research time in these two areas. Product documentation, the “manuals” for administrators, users, and developers, provides complete, accurate, and spin-free descriptions of how to set up and configure a product, of what a product does (its services and data), and of how it works. Product trials give us the opportunity to put our hands on a product and try it out for customer service tasks.

What’s In?

The new Framework has these four top-level evaluation criteria:

  • Customer Service Apps lists the key capabilities of the apps included in a customer service software product or added to it via features and add-ons.
  • Channels, Devices, Languages list supported assisted-service and self-service channels, devices attachable to those channels, and languages that agents and customers may use to access the customer service apps on those devices.
  • Reporting examines the facilities to measure and present information about a product’s usage, performance, effectiveness, and efficiency. Analysts use this information continually to refine their customer service product deployments.
  • Product, Supplier, Offer. Product examines the history, release cycle, development plans, and customer base for a customer service product; these are the factors that determine product viability. Supplier examines the factors that determine the supplier’s viability. Offer examines the supplier’s markets for the product and the product’s packaging and pricing.

This is the information that we use to evaluate a customer service product.

What’s Missing?

Technology descriptions and their finely granular analyses are out. For example, the new reports do not include tables listing and describing the attributes/fields of the data models for key customer service objects/records, like cases and knowledge items, or the services that products provide for operating on those data models to perform customer service tasks. The new reports do not present analyses of individual data model attributes or individual services, either. Rather, the reports present a coarsely granular analysis of data models and services with a focus on strengths, limitations, and differentiators. We explain why a data model is rich and flexible, or we identify important missing types, attributes, and relationships, and then we summarize the details that support our analysis.
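To make the shift in granularity concrete, here is a minimal sketch of the kind of case and knowledge-item model our old reports would have tabulated attribute by attribute. Every name in it is hypothetical and illustrative; it is not drawn from Salesforce’s documentation or from any of our reports.

```typescript
// Hypothetical, simplified case and knowledge-item models.
// Field and service names are illustrative only; they are not taken
// from any specific product's documentation.

interface Case {
  id: string;
  subject: string;
  description: string;
  status: "new" | "open" | "escalated" | "resolved" | "closed";
  priority: "low" | "medium" | "high";
  customerId: string;      // relationship to a customer record
  channel: "phone" | "email" | "chat" | "web" | "social";
  createdAt: Date;
  resolutionId?: string;   // relationship to a knowledge item, if resolved
}

interface KnowledgeItem {
  id: string;
  title: string;
  body: string;
  audience: "internal" | "customer";
  publishedAt?: Date;
}

// Illustrative services that operate on those models to perform
// customer service tasks.
interface CaseManagementServices {
  createCase(subject: string, customerId: string, channel: Case["channel"]): Case;
  escalateCase(caseId: string): void;
  resolveCase(caseId: string, resolutionId: string): void;
}

// A concrete instance, just to show what an attribute-level description covers.
const exampleCase: Case = {
  id: "C-1001",
  subject: "Password reset fails",
  description: "Customer cannot reset their password from the portal.",
  status: "open",
  priority: "medium",
  customerId: "CUST-42",
  channel: "web",
  createdAt: new Date(),
};
```

Under the new Framework, a model like this would be discussed only at the coarse level: for example, that the case model supports relationships to customers and to knowledge items, or that it lacks an important attribute or relationship.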

“Customer Service Technologies” comprised more than half the evaluation criteria of the previous Framework and two-thirds of the content of our previous Framework-based reports. These criteria described and analyzed case management, knowledge management, findability, integration, and reporting and analysis. For example, within case management, we examined the case model, case management services, case sources, and case management tools. They’re out in the new version, and they’re the reason the reports are shorter. But they’re the basis of our analysis of the Customer Service Apps criterion. If a product has a rich case model and a large set of case management services, then those capabilities will be listed among the case management app’s key capabilities in our Customer Service Apps Table, and we’ll explain why we listed them in the analysis following the Table. On the other hand, if a product’s case model is limited, then the case model will be absent from the Table’s list of key capabilities, and we’ll call out the limitations in our analysis. Just a reminder: our bases for the evaluation of the Customer Service Apps criterion, the subcriteria of Customer Service Technologies in the old Framework, are shown in the Table below:

Table 1. We present the bases for the evaluation of the Customer Service Apps criterion in this Table.

Trustworthy Analysis

We had always felt that we had to demonstrate that we understood a technology to justify our analysis of that technology. We had also felt that you wanted and needed our analysis of all of that technology, down to every individual data attribute and service. You have taught us that you’d prefer higher-level analyses, with low-level detail only where it’s needed to understand the most salient strengths, limitations, and differentiators.

The lesson that we’ve learned from you can be found in a new generation of Product Evaluation Reports. Take a look at our latest Report, our evaluation of Salesforce Service Cloud, and let us know if we’ve truly learned that lesson.

Remember, though, if you need more detail, then ask us for it. We’ve done the research.


Evaluating Customer Service Products

Framework-based, In-depth Product Evaluation Reports

We recently published our Product Evaluation Report on Desk.com, Salesforce’s customer service offering for small and mid-sized businesses. “Desk” is a very attractive offering with broad and deep capabilities. It earns good grades on our Customer Service Report Card, including Exceeds Requirements grades in Knowledge Management, Customer Service Integration, and Company Viability.

We’re confident that this report provides input and guidance to analysts in their efforts to evaluate, compare, and select customer service products, and we know that it provides product assessment and product planning input for product managers. Technology analysts and product managers are the primary audiences for our reports; we research and write to help exactly these roles. Like all of our Product Evaluation Reports about customer service products that include multiple apps (case management, knowledge management, web self-service, communities, and social customer service), it’s a big report, more than 60 pages.

Big is good. It’s the reports’ depth and detail that make them so. Our research for them always includes studying a product’s licensed admin, user, and, when accessible, developer documentation, the manuals or online help files that come with a product. We read the patents or patent applications that are a product’s technology foundation. Whenever offered, we deploy and use the products. (We took the free 30-day trial of Desk.) We’ll watch suppliers’ demonstrations, but we rely on the actual product and its underlying technologies.

On the other hand, we’ve recently been hearing from some, especially product marketers when they’re charged with reviewing report drafts (we never publish without the supplier’s review), that the reports are too big. Okay. Point taken. Perhaps it is time to update our Product Evaluation Framework, the report outline, to produce shorter, more actionable reports: reports with no less depth and detail but with less descriptive content and more salient analytic content. It’s also time to tighten up our content.

Product Evaluation Reports Have Two Main Parts

Our Product Evaluation Reports have had two main parts: Customer Service Best Fit and Customer Service Technologies. Customer Service Best Fit “presents information and analysis that classifies and describes customer service software products…speed(ing) evaluation and selection by presenting easy to evaluate characteristics that can quickly qualify an offering.” Customer Service Technologies examines the implementations of a product’s customer service applications and their foundation technologies, as well as its integration and reporting and analysis capabilities. Here’s where the reports’ depth and detail (and most of their content) live. Going forward, we’ll continue with this organization.

Streamlining Customer Service Best Fit

We will revamp and streamline Customer Service Best Fit, improving naming and emphasizing checklists. The section will now have this organization:

  • Applications, Channels, Devices, Languages
  • Packaging and Licensing
  • Supplier and Product
  • Best Prospects and Sample Customers
  • Competitors

Applications, Channels, Devices, Languages are lists of key product characteristics, characteristics that quickly qualify a product for deeper consideration. More specifically, applications are the sets of customer service capabilities “in the box” with the product: case management, knowledge management, and social customer service, for example. Channels are assisted-service, self-service, and social. We list apps within supported channels to show how what’s in the box may be deployed. Devices are the browsers and mobile devices the product supports for internal users and for end customers. Languages are two lists: one for the languages in which the product deploys and supports its administration and internal users, and one for the languages it supports for end customers.
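As an illustration of how those checklists nest, here is a hypothetical best-fit summary for an imaginary product. None of the entries is taken from an actual evaluation; the shape of the structure is the point.

```typescript
// Hypothetical best-fit checklist, showing apps listed within the channels
// that support them, plus device and language lists. Illustrative only.
const bestFit = {
  channels: {
    assistedService: ["case management", "knowledge management"],
    selfService: ["web self-service", "knowledge management", "communities"],
    social: ["social customer service"],
  },
  devices: {
    internalUsers: ["desktop browsers", "iOS", "Android"],
    endCustomers: ["desktop browsers", "mobile browsers"],
  },
  languages: {
    administrationAndInternalUsers: ["English", "French", "German"],
    endCustomers: ["English", "French", "German", "Spanish", "Japanese"],
  },
};

console.log(Object.keys(bestFit.channels)); // quick look at the channel groupings
```

Seeing the structure this way also makes the purpose of the checklists clear: a reader can qualify or disqualify a product at a glance, before reading any analysis.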

Packaging and Licensing presents how the supplier offers the product, the fees that it charges for the offerings, and the consulting services available and/or necessary to help licensees deploy the offerings.

Supplier and Product present high-level assessments of the supplier’s and the product’s viability. For the supplier, we present history, ownership, staffing, financial performance, and customer growth. For the product, we present history, current development approach, release cycle, and future plans.

Best Prospects and Sample Customers are lists of the target markets for the product: the industries, business sizes, and geographies wherein the product best fits. This section also covers the product’s current customer base, lists typical/sample customers within those target markets, and, if possible, presents screen shots of their deployments.

Competitors lists the product’s closest competitors, its best alternatives. We’ll also include a bit of analysis explaining what makes them the best alternatives and where the subject product has differentiators.

Tightening Up Customer Service Technologies

Customer Service Technologies is the key value-add and most significant differentiator of our Product Evaluation Reports. It’s why you should read our reports, but, as we mentioned, it’s also the main reason why they’re big.

We’ve spent years developing and refining the criteria of our Evaluation Framework. The criteria are the results of continuing work with customer service products and technologies and of our complementary work with the people who are those products’ prospects, licensees, suppliers, and competitors. We’re confident that we evaluate the technologies of customer service products by the most important, relevant, and actionable criteria. Our approach creates common, supplier-independent and product-independent analyses. These analyses enable the evaluation and comparison of similar customer service products and result in faster, lower-risk selection of the product that best fits a set of requirements.

However, we have noticed that the descriptive content that forms the basis for our analyses has gotten a bit lengthy and repetitive (repeating information in Customer Service Best Fit). We plan to tighten up Customer Service Technologies content and analysis in these ways:

  • Tables
  • Focused Evaluation Criteria
  • Consistent Analysis
  • Reporting

Too much narrative and analysis has crept into Tables. We’ll make sure that Tables are bulleted lists with little narrative and no analysis.

Evaluation criteria have become too broad. We’ve been including detailed descriptions and analyses of related and supported resources along with the resource that’s the focus of the evaluation. Going forward, when we describe and analyze the details of a case model, for example, we’ll not also describe and analyze the details of the user and customer models. Rather, we’ll just describe the relationships between the resources.
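A small, purely hypothetical sketch of the difference we mean, with made-up type names rather than any product’s actual models:

```typescript
// Illustrative only: two ways a case model can relate to a customer model.
// Under the tightened criteria we analyze the focus resource (the case) and
// note its relationships, rather than re-analyzing the related models.

interface Customer {
  id: string;
  name: string; // the customer model gets its full treatment under its own criterion
}

// Too broad: re-describing the related resource inside the case analysis.
interface CaseWithEmbeddedCustomer {
  id: string;
  subject: string;
  customer: Customer; // pulls the whole customer model into the case discussion
}

// Focused: the relationship is noted, the details stay out of scope.
interface CaseWithCustomerReference {
  id: string;
  subject: string;
  customerId: Customer["id"]; // relationship only
}
```

The point of the contrast is scope control: the related resources still get analyzed, just under their own criteria rather than repeatedly within every criterion that touches them.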

Our analyses will have three sections. The first will summarize what’s best about a product. The second will present additional description and analysis where Table content needs further examination. The third will be “Room for Improvement,” areas where the product is limited. This approach will make the reports more actionable and more readable as well as shorter.

In reporting, we’ll stop examining instrumentation, the collection and logging of the data that serves as report input. The presence (or absence) of reports about the usage and performance of customer service resources is really what matters. So, we’ll call the criterion “Reporting” and we’ll list the predefined reports packaged with a product in a Table. We’ll discuss missing reports and issues in instrumentation in our analysis.

Going Forward

Our Product Evaluation Report about Microsoft Dynamics CRM Online Service will be the first to be written against the streamlined Framework. Expect it in the next several weeks. Its Customer Service Best Fit section really is smaller. Each of its Customer Service Technologies sections is smaller, too, and more readable and more actionable.

Here’s the graphic of our Product Evaluation Framework, reflecting the changes that we’ve described in this post.


Please let us know if these changes make sense to you, and whether the new versions of the Product Evaluation Reports that leverage them really are more readable and more actionable.

Framework for Evaluating Customer Service Products

This week’s report is a new version of our Framework for Evaluating Customer Service Software Products. We had two goals for its design. First, we wanted your evaluation, comparison, and selection processes to be simpler and faster. Second, we wanted shorter and more actionable Product Review Reports. The new Framework eliminates evaluation criteria that do not differentiate. For example, we no longer analyze and evaluate web content management for a product’s self-service and assisted-service UIs. These UIs have become a bit static. They’re configurable and localizable, but they’re no longer as customizable and manageable as they had been. The new Framework also decreases the number of factors (sub-criteria) that we consider within an evaluation criterion. For example, the Knowledge Management criterion now has two factors: Knowledge Model and Knowledge Management Services. The previous version of the Framework examined these and six others.

We also added a criterion: Case Management. When we began evaluating customer service products back in 1993, we felt that case management, while a critical customer service process, was well understood, did not differentiate, and was not really customer-centric. We’ve changed our point of view. We still believe that the purpose of customer service is answering customers’ questions and solving customers’ problems. However, we also recognize that at the moment a customer asks a question or poses a problem, you might not have an answer or solution available. You create a case to represent that question or problem, your process to resolve the case is a process to find or develop an answer or solution, and the case’s resolution is, itself, the answer or solution. Our evaluation of case management considers four factors that focus on a product’s packaged services and tools for performing the tasks of the case management process. The process includes finding and using case resolutions in communities and social networks.
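Here is a minimal sketch of the case-as-question idea described above. The names, states, and functions are hypothetical and are not drawn from any product or from our four evaluation factors.

```typescript
// Illustrative case lifecycle: a case represents a customer's question or
// problem, and the resolution, once found or developed, is itself the answer.

type CaseStatus = "open" | "researching" | "resolved";

interface ServiceCase {
  id: number;
  question: string;     // the customer's question or problem
  status: CaseStatus;
  resolution?: string;  // once present, this is the answer or solution
}

function openCase(id: number, question: string): ServiceCase {
  return { id, question, status: "open" };
}

function resolveCase(c: ServiceCase, answer: string): ServiceCase {
  // The resolution process ends when an answer or solution is found or
  // developed; that resolution becomes the reusable answer.
  return { ...c, status: "resolved", resolution: answer };
}

const c = resolveCase(
  openCase(1, "How do I reset my password?"),
  "Use the self-service reset link on the sign-in page."
);
console.log(c.status, c.resolution);
```

The sketch stops well short of the four factors we evaluate; it only illustrates why a case and its resolution end up being the same answer the customer was looking for.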

Customer Service Best Fit and Customer Service Technologies are the Framework’s two top-level evaluation criteria. Customer Service Best Fit presents information and analysis that classifies and describes customer service software products. Customer Service Technologies examines the implementation of a product’s customer service applications. The graphic below shows the Framework, its top-level criteria, and their sub-criteria.


We plan to use the Framework to evaluate every type of customer service product within our current research: case management, knowledge management, virtual assistant, and social network monitoring, analysis, and interaction. The Customer Service Best Fit criterion applies very nicely to any product. The application of the Customer Service Technologies criterion is product-type dependent. Look for our Product Review Report on Salesforce Service Cloud. It will be the first against the new Framework. Based on the draft of that report, the Framework works well.