Frequently Asked Questions

How do our tools work?

CaliberAI's machine learning systems analyse the syntactic, lexical and semantic features of a text, then output a probability score that the text belongs to one of our relevant categories (Defamatory and/or Harmful, or Neutral).

The systems are trained almost entirely on manually labelled data, carefully assembled by domain experts with years of experience in managing public debate, so that they learn to recognise the language patterns of defamatory and harmful content.
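Conceptually, a classifier of this kind maps text features to a probability distribution over the categories. The sketch below illustrates the general idea with toy lexical cue counts and a softmax; the cue lists, feature set and weights are invented purely for illustration and bear no relation to CaliberAI's actual models or training data.

```python
import math

CATEGORIES = ["Defamatory", "Harmful", "Neutral"]

# Hypothetical lexical cue lists -- illustrative only.
DEFAMATORY_CUES = {"fraud", "liar", "corrupt"}
HARMFUL_CUES = {"idiot", "vermin"}

def features(text):
    """Toy lexical feature scores for one piece of text."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    return [
        sum(t in DEFAMATORY_CUES for t in tokens),
        sum(t in HARMFUL_CUES for t in tokens),
        1.0,  # constant bias toward Neutral in the absence of cues
    ]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(text):
    """Return a probability per category for the given text."""
    return dict(zip(CATEGORIES, softmax(features(text))))

print(classify("The senator is a corrupt liar."))
print(classify("The senator gave a speech today."))
```

A production system would of course replace the hand-written cues with learned representations of syntax, vocabulary and meaning, but the output shape is the same: one probability per category.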
