PredictionIO's evaluation module allows you to streamline the process of testing many combinations of engine parameter knobs and deploying the best one, using statistically sound cross-validation methods.
There are two key components:
Engine
It is our evaluation target. During evaluation, in addition to the train and deploy modes described in earlier sections, the engine also generates a list of testing data points. These data points are a sequence of (Query, Actual Result) tuples. Queries are sent to the engine, and the engine responds with a Predicted Result, in the same way the engine serves a normal query.
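As a rough sketch of where these tuples come from: in a Spark-based template, the data source can override `readEval` to return one or more folds of training data together with `(Query, ActualResult)` test tuples. The `Query`, `ActualResult`, `TrainingData`, and `DataSourceParams` types below are hypothetical stand-ins for whatever your engine template defines, and the toy data is for illustration only.

```scala
import org.apache.predictionio.controller.{EmptyEvaluationInfo, PDataSource, Params}
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Hypothetical stand-ins for the types your engine template defines.
case class Query(features: Array[Double])
case class ActualResult(label: Double)
class TrainingData(val labeledPoints: RDD[(Double, Array[Double])]) extends Serializable
case class DataSourceParams(appName: String, evalK: Option[Int]) extends Params

class DataSource(val dsp: DataSourceParams)
  extends PDataSource[TrainingData, EmptyEvaluationInfo, Query, ActualResult] {

  override def readTraining(sc: SparkContext): TrainingData =
    // Placeholder: a real data source reads events from the event server.
    new TrainingData(sc.emptyRDD[(Double, Array[Double])])

  // Each element of the returned Seq is one evaluation fold:
  // (training set, evaluation info, test set). PredictionIO sends each
  // Query in the test set to the engine and pairs the engine's Predicted
  // Result with the ActualResult recorded here.
  override def readEval(sc: SparkContext)
  : Seq[(TrainingData, EmptyEvaluationInfo, RDD[(Query, ActualResult)])] = {
    val all = sc.parallelize(Seq(   // toy labeled data for illustration
      (1.0, Array(0.2, 0.8)),
      (0.0, Array(0.9, 0.1))))
    val test = all.map { case (label, features) =>
      (Query(features), ActualResult(label))
    }
    Seq((new TrainingData(all), new EmptyEvaluationInfo(), test))
  }
}
```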
Evaluator
The evaluator joins the sequence of Query, Predicted Result, and Actual Result tuples together and evaluates the quality of the engine. PredictionIO enables you to implement any metric with just a few lines of code.
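For example, here is a minimal accuracy metric, assuming a classification-style template with hypothetical `Query`, `PredictedResult`, and `ActualResult` types that each carry a label. You implement a single `calculate` function that scores one tuple, and `AverageMetric` averages the scores over all test points.

```scala
import org.apache.predictionio.controller.{AverageMetric, EmptyEvaluationInfo}

// Hypothetical stand-ins for the engine template's types.
case class Query(features: Array[Double])
case class PredictedResult(label: Double)
case class ActualResult(label: Double)

// Scores one (Query, Predicted Result, Actual Result) tuple at a time;
// AverageMetric reports the mean score over the whole test set.
case class Accuracy()
  extends AverageMetric[EmptyEvaluationInfo, Query, PredictedResult, ActualResult] {
  def calculate(query: Query, predicted: PredictedResult, actual: ActualResult): Double =
    if (predicted.label == actual.label) 1.0 else 0.0
}
```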
We will discuss various aspects of evaluation with PredictionIO.
- Hyperparameter Tuning - an end-to-end example of using the PredictionIO evaluation module to select and deploy the best engine parameters; a minimal sketch of the moving parts follows this list.
- Evaluation Dashboard - the dashboard where you can see a detailed breakdown of all previous evaluations.
- Choosing Evaluation Metrics - we cover some basic machine learning metrics.
- Building Evaluation Metrics - we illustrate how to implement a custom metric with as few as one line of code (plus some boilerplate).
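As a taste of what the hyperparameter tuning page covers, the sketch below enumerates candidate engine parameters with an `EngineParamsGenerator` and declares an evaluation target. The `DataSourceParams`, `AlgorithmParams`, `ClassificationEngine`, and `Accuracy` names are assumptions borrowed from a typical classification template (and the metric sketch above), not fixed parts of the API.

```scala
import org.apache.predictionio.controller.{
  EngineParams, EngineParamsGenerator, Evaluation, Params}

// Hypothetical parameter classes for a classification-style template.
case class DataSourceParams(appName: String, evalK: Option[Int]) extends Params
case class AlgorithmParams(lambda: Double) extends Params

object EngineParamsList extends EngineParamsGenerator {
  // Base parameters: which app to read data from and how many
  // cross-validation folds to use.
  private[this] val baseEP = EngineParams(
    dataSourceParams = DataSourceParams(appName = "MyApp", evalK = Some(5)))

  // Each element is one candidate knob setting; the evaluation runs all
  // of them and reports the best according to the chosen metric.
  engineParamsList = Seq(
    baseEP.copy(algorithmParamsList = Seq(("naive", AlgorithmParams(lambda = 10.0)))),
    baseEP.copy(algorithmParamsList = Seq(("naive", AlgorithmParams(lambda = 100.0)))))
}

// Ties the engine to the metric as an evaluation target; assumes the
// template provides a ClassificationEngine factory, and reuses the
// Accuracy metric sketched earlier.
object AccuracyEvaluation extends Evaluation {
  engineMetric = (ClassificationEngine(), Accuracy())
}
```

With objects like these in place, `pio eval` (with your own package names in place of these hypothetical ones) scores every candidate parameter set and reports the best one, which you can then deploy.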