To analyze the efficiency of the services delivered to customers across the various channels, what types of data should be collected?
The first step is to obtain reliable information, a 'snapshot' of the flows on the various channels: KPIs on handling, tracking of repeated attempts or contacts, and a measure of the company's overall accessibility. To begin with, this can be collected from the systems that receive and process requests, such as a call center or digital engagement solution, or a unified CRM.
But the study must also analyze the flow of requests, their steps, and the 'real' processing times. By 'real' processing time we mean the time until the request is closed in the company's information system, once all the internal activities it is likely to have generated have been completed, and ideally also when the customer who made the request considers it satisfied, which we validate by collecting and checking their feedback as well.
For example, a request to change a 'box' or to replace a part on a computer is of course not 'handled' when the agent qualifies the request and the customer receives a more or less detailed confirmation of support, but when the correct replacement equipment is delivered and functional. That includes any additional delay introduced by the teams managing stock, the carrier, possibly a failed delivery due to imprecise information in the customer file, possible installation problems caused by missing support or a missing part, and so on. And then the company will still want to know how much this service cost, and to master some basic aspects of profitability.
To evaluate the efficiency of the service, it is therefore necessary to analyze all the available traces of the activity across all the services involved, starting from a correlated typology of requests. We can then work on notions such as the 'real' resolution rate at first, second, or fifteenth contact; compare handling that we would judge qualitatively against our own standards with the customer's standards, expectations, and the feedback it generated; study in detail the various stages of handling, for the good examples as well as the bad ones; and of course listen to or re-read the content of these interactions and the customer comments!
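As a minimal sketch of the 'real' resolution-rate idea above, one can count how many contacts each case needed before it was truly closed. The case IDs, channels, and flags below are invented for illustration; real data would come from the engagement tools discussed earlier.

```python
from collections import Counter

# Hypothetical contact log: one row per contact, (case_id, channel, resolved_flag).
# A case may appear several times if the customer had to come back.
contacts = [
    ("C-001", "phone", True),
    ("C-002", "email", False),
    ("C-002", "phone", True),
    ("C-003", "chat", False),
    ("C-003", "chat", False),
    ("C-003", "phone", True),
]

def resolution_rate_by_contact_count(log):
    """Return {n_contacts: share_of_resolved_cases}, i.e. the 'real'
    first/second/nth-contact resolution rates."""
    contacts_per_case = Counter(case_id for case_id, _, _ in log)
    resolved = {case_id for case_id, _, done in log if done}
    dist = Counter(contacts_per_case[c] for c in resolved)
    total = len(resolved)
    return {n: count / total for n, count in sorted(dist.items())}

rates = resolution_rate_by_contact_count(contacts)
print(rates)  # {1: 0.333..., 2: 0.333..., 3: 0.333...}
```

Here the first-contact resolution rate is `rates[1]`; a real analysis would, as the text says, correlate these counts with the type of request and the customer's feedback.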
So, to really analyze the quality of service, we're not just talking about data from the customer service tools?
Exactly. The term "360° analysis" of customer service is quite apt, even if it has been somewhat emptied of meaning by overuse among some marketers in the sector. '360°' literally implies looking at the same time, or successively, 'ahead', 'to the sides', and 'behind'. You can translate this as follows: ahead = the flows and requests in my engagement tools; the sides = perceived quality, feedback on review sites, my 'hot' and 'cold' barometers, plus the evaluations made internally, that is, delivered quality; and behind = the back office, but also the sales generated and the costs, so in general several sources also coming from the company's IS, like the often varied internal CRM.
The purpose of such an approach is in fact to trace all the journeys, varied as they are, of my customers and prospects when they need me, and to track internal dysfunctions or the difficulties of the teams. From the data point of view, we therefore need all the steps, analyzed in correlation with as much internal and external feedback as possible; consequently, all the data we have just discussed is necessary.
We have even recently considered integrating a system for collecting agents' feedback on their various customer exchanges, in the space dedicated to them on http://www.itsolutionssolved.com.au/zoho-crm-consultant-melbourne/, with a view to creating a new 'climate' type data source to enrich our campaign analysis and goal-tracking modules.
So the data, to enable quality control, must necessarily be reprocessed to be made digestible?
Yes, or rather some of it must be, while other data must be enriched. And no, we will never surpass human skills, including emotional ones, in assessing the quality of a service!
Let's start with the reprocessing. First of all, the data from the various application sources, sometimes destined for the same report, are not all provided to the same standards, and some supposedly 'raw' data or KPIs are described as identical when their historization or calculation used different filters or sometimes radically different methods. This is the "cleaning" work our analysts perform when we build a connector to several solutions covering the same part of the customer relationship, for example the various engagement solutions feeding one of our dashboards: we must understand how each figure we use was produced, and compare or group only what is comparable or "groupable".
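The "cleaning" step can be illustrated with a toy example: two hypothetical engagement tools report handle time under different field names and units, so they must be mapped onto one shared schema before any figure is compared or averaged. All field names and records below are invented.

```python
# Source A reports average handle time in seconds, source B in minutes,
# and the two use different field names for the same notion.
source_a = [{"aht_sec": 240, "channel": "phone"},
            {"aht_sec": 180, "channel": "phone"}]
source_b = [{"handleTimeMin": 5.0, "channel": "chat"},
            {"handleTimeMin": 2.0, "channel": "chat"}]

def normalize(records, time_key, to_seconds):
    """Map one source onto a shared schema, with time always in seconds."""
    return [{"handle_time_s": r[time_key] * to_seconds,
             "channel": r["channel"]} for r in records]

unified = (normalize(source_a, "aht_sec", 1)
           + normalize(source_b, "handleTimeMin", 60))
avg = sum(r["handle_time_s"] for r in unified) / len(unified)
print(round(avg))  # 210 seconds across both tools
```

Averaging the raw figures without the unit conversion would have produced a number that looks plausible but means nothing, which is exactly the trap the analysts' cleaning work guards against.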
Computer processing is also needed to reconcile the various formats coming from different tables and databases. And although it has become something of a buzzword, we really are talking about Big Data: with so many cases and interactions, it would be humanly impossible, even for a small customer service team, to review each interaction one by one and debrief every case. So we have to make the servers work to extract the 'substantive marrow' from the data available to us, and today we can: we have increasingly reliable computation and semantic-analysis tools, as well as speech-to-text tools, all of which help target the analyses for the next stage of the process, which is handled by... the human!
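A minimal sketch of that targeting idea: once calls have been transcribed to text, a simple keyword filter can flag the interactions most worth a human reviewer's time. The transcripts and the risk-term list are invented for illustration; a real pipeline would use proper semantic-analysis tooling rather than word matching.

```python
# Invented 'risk' vocabulary used to prioritize human review.
RISK_TERMS = {"cancel", "refund", "complaint", "lawyer"}

# Hypothetical speech-to-text output, keyed by interaction id.
transcripts = {
    "call-101": "i want a refund and i will cancel my subscription",
    "call-102": "thanks, the replacement box works perfectly",
    "call-103": "this is my third complaint about the delivery",
}

def flag_for_review(texts, terms):
    """Return the ids of interactions whose transcript contains a risk term."""
    return sorted(
        call_id for call_id, text in texts.items()
        if terms & set(text.lower().split())
    )

to_review = flag_for_review(transcripts, RISK_TERMS)
print(to_review)  # ['call-101', 'call-103']
```

The point is not the filter itself but the division of labor the text describes: the machine narrows thousands of interactions down to the few that deserve a human ear.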
The data must be enriched and its interpretation put on rails, and for that the human is best placed. Just as I would rather read feedback on my new mobile application than get the raw number of downloads (which tells me nothing about who downloaded it, why, or how many times the same user downloaded it again), I would also rather review an assessment made by my quality department on a call than listen to the call itself. The analysis of large masses of data, to support decision-making, must therefore rest on data that is as clear as possible, having already undergone both automated and human processing, to bring the data into compliance and to begin classifying it, giving it a color, or a climate.
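The "color, or climate" idea can be sketched as a last pre-classification step before human review: each piece of feedback gets a provisional color from a pre-computed satisfaction score, which a reviewer can then confirm or override. The thresholds, scores, and labels below are all invented assumptions.

```python
def climate(score):
    """Map a pre-computed satisfaction score (0-10) to a climate color.
    Thresholds are illustrative, not a recommended scale."""
    if score >= 8:
        return "green"
    if score >= 5:
        return "orange"
    return "red"

# Hypothetical enriched feedback: (text, satisfaction score).
feedback = [("great support, fast fix", 9),
            ("delivery was late again", 3),
            ("ok but had to call twice", 6)]

colored = [(text, climate(score)) for text, score in feedback]
print(colored)
```

The output is deliberately 'limpid' in the sense used above: a reviewer scanning a dashboard sees colors first, and drills into the raw interaction only where the climate warrants it.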