Taken at face value, Xref might look like just a quick and simple solution to a very niche problem - the inconsistency and inefficiency of reference checking. But delve a little deeper and you’ll see what really makes us proud of the platform: there’s far more to the issue, and a much greater opportunity in resolving it, than you’d first imagine.
The more we expand our service and the number of markets we offer it to, the more we learn about the vast and impressive capabilities of our team and platform. The way the Xref architecture is built allows us to adapt and evolve for specific market needs, and our team has the experience and knowhow to make these changes a seamless success.
However, since launch there’s been one lingering objection, raised both by potential new clients and by those already using and otherwise delighted with the platform. It relates to tone of voice. There’s been a consistent unease that, by using an automated platform, those conducting reference checks will lose the tone of voice indicators they’d usually rely on.
Now, we’d argue tone of voice should never be relied on to support a hiring decision - the happiest, most positive-sounding referee will often turn out to be the biggest liar! But we recognise the value in quickly understanding the sentiment behind a statement - we realised we just needed to find a tech-based way of identifying it.
The Xref platform generates, on average, 60 per cent more data than you're likely to gather using traditional referencing methods. That’s a lot of words and insights! But the issue with increasing the amount of feedback is the time it takes to read through it, and the risk of misinterpretation - it’s human nature that no two people will read a sentence in exactly the same way.
So we sought a way to give tone of voice a digital upgrade.
We used artificial intelligence (AI) to analyse the way sentences are constructed - stripping out the basic conjunctions, focussing on the relevance of the answer to the question, and ensuring the engine was able to recognise comments in context. We then spent eight months perfecting the algorithm and trained the engine to deliver an analysis of text to around 80 per cent accuracy.
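To make the idea concrete, here is a deliberately minimal sketch of the kind of pipeline described above: strip out basic conjunctions, then score what remains against a sentiment lexicon. The word lists and function names here are illustrative assumptions, not Xref's actual model - the real engine uses a trained AI algorithm, not a hand-written lexicon.

```python
# Illustrative sketch only: a toy lexicon-based sentiment classifier.
# The word lists below are hypothetical examples, not Xref's engine.

CONJUNCTIONS = {"and", "but", "or", "so", "yet", "for", "nor"}
POSITIVE = {"excellent", "reliable", "positive", "great", "strong"}
NEGATIVE = {"poor", "unreliable", "negative", "weak", "late"}

def preprocess(sentence: str) -> list:
    """Lower-case, tokenise, and strip out basic conjunctions."""
    tokens = [w.strip(".,!?").lower() for w in sentence.split()]
    return [w for w in tokens if w and w not in CONJUNCTIONS]

def sentiment(sentence: str) -> str:
    """Label a sentence positive, negative, or neutral by lexicon score."""
    words = preprocess(sentence)
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A real engine would go much further - weighting a comment's relevance to the question asked and reading it in context - but the preprocess-then-score shape is the same.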
Next, we ran some example questions past real people, asking them to rate the sentiment of sample data and feed their ratings back into the AI engine to push its accuracy well beyond that initial 80 per cent. In July, we released the Xref Sentiment Engine on the Xref platform, giving clients a breakdown of the sentiment in the reference responses they receive - positive, negative and neutral ratings at a glance.
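The at-a-glance breakdown described above can be pictured as a simple aggregation over per-answer labels. This is a hypothetical sketch of how such a summary might be computed - the function name and output shape are our own illustration, not the platform's API.

```python
# Illustrative only: summarise per-answer sentiment labels into
# the kind of at-a-glance breakdown a client might see.
from collections import Counter

def sentiment_breakdown(labels):
    """Return the share of positive, negative and neutral answers."""
    counts = Counter(labels)
    total = len(labels)
    return {
        label: counts.get(label, 0) / total
        for label in ("positive", "negative", "neutral")
    }
```

Given the labels for each answer in a reference, a hiring manager can see in one glance whether the feedback leans positive, negative, or neutral, without reading every response line by line.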
The algorithm now works away in the background, removing assumptions, taking subjectivity out of reference reading, and ensuring no hiring decisions are made on the basis of a misinterpretation.
We found a data-driven approach to overcoming one of the most common objections we face, and to offering clients another piece of insight to inform their hiring decisions. The Xref platform will continue to evolve, and we’ll keep finding new ways to meet the expectations and aspirations of clients old and new, using the power of technology.