The classifier computes a score on a 1 to 5 scale for the content passed to it. A score of 5 means the content most likely contains violence, while a score close to 1 implies the content is safe to publish. In most cases, using 4 as the cutoff value works well.
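To make that cutoff concrete, here is a minimal client sketch in Python. The endpoint URL, the Bearer auth header, and the multipart field name are assumptions for illustration only; check the service documentation for the real contract.

import requests

# Hypothetical endpoint and key -- replace with the real service values.
API_URL = "https://api.example.com/violence-detection"
API_KEY = "your-api-key"

def is_safe_to_publish(image_path: str, threshold: int = 4) -> bool:
    """Return True when the violence score is below the chosen cutoff."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )
    response.raise_for_status()
    result = response.json()  # e.g. {"description": "...", "value": 1}
    return result["value"] < threshold

With the default threshold, scores of 1 to 3 pass and scores of 4 or 5 are rejected.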
Examples
Take the following example with a kitchen knife. It is sharp and can be misused, but unless you are the onion on the table, it is entirely harmless, and our API is intelligent enough to tell this situation apart from its opposite.
Photo by Tree of Life Seeds on Unsplash
The response from the API is as expected.
{ "description": "Very unlikely contains violence", "value": 1 }
Now, here is a lady holding a cleaver for a show or a similar occasion. She is laughing, so we get the idea that she is joking. It shouldn't be taken too seriously.
Image by Juergen_G from Pixabay
The response from the API is as expected. Still no violence, but a value of 2 means there is some small chance of violent content.
{ "description": "Unlikely contains violence", "value": 2 }
Lastly, things are getting serious. The model, wearing a pirate suit, is pointing the knife at the camera with a shadowy face, so this can be a sign of violence.
Image by Felix Lichtenfeld from Pixabay
The response from the API notes that this picture possibly contains violence.
{ "description": "Possible violence", "value": 3 }
For obvious reasons, we can't give examples from more violent scenes, such as zombies, killings, blood, and gore. You can subscribe to the service and try it for yourself in the Live Demo section.
Use cases
This API is useful for apps with direct messaging features, which often need to monitor content as it arrives. Rather than reviewing all content manually, the Violence Detection API lets you automate the content approval process.
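As a sketch of such an automated pipeline, an app could map each score to a moderation decision. The cutoffs below are our own illustrative policy, not something the API prescribes:

from enum import Enum

class ModerationAction(Enum):
    APPROVE = "approve"
    REVIEW = "review"   # route to a human moderator
    REJECT = "reject"

def moderate(score: int) -> ModerationAction:
    """Map a 1-5 violence score to a moderation decision.

    Illustrative policy: scores of 1-2 ("very unlikely" / "unlikely")
    pass automatically, 3 ("possible") goes to a human reviewer, and
    4-5 are blocked outright.
    """
    if score <= 2:
        return ModerationAction.APPROVE
    if score == 3:
        return ModerationAction.REVIEW
    return ModerationAction.REJECT

This way only the ambiguous middle of the scale consumes human reviewer time.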
This API also works for cartoon images containing violence.