Now we’ll retrain the model. You can either annotate selected instances from the file you already uploaded or upload already-annotated data as a new file.
Option 1: Annotate selected instances from the uploaded file
Select instances to annotate from the uploaded text
Select Action: MODEL
Select your model
Select Action: UPDATE
Select Mode: REQUEST and select how many instances you want to annotate.
Select sampler:
Random: Returns random text instances from your dataset
Margin (recommended): An uncertainty metric that selects the instances the underlying model is least confident about (see the sketch after these steps)
Then click the “Request” button.
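Margin sampling is commonly computed as the gap between the model’s two highest predicted class probabilities, with the smallest gaps marking the most uncertain instances. Below is a minimal sketch of that idea using scikit-learn; the classifier, the example texts, and the variable names are illustrative assumptions, not part of the platform.

```python
# A minimal sketch of margin sampling: request the instances whose top-two
# class probabilities are closest together (i.e. the model is least certain).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["awful experience", "fantastic quality"]
labels = ["negative", "positive"]
pool_texts = ["great product", "terrible service", "okay I guess", "loved it"]

vectorizer = TfidfVectorizer().fit(labeled_texts + pool_texts)
clf = LogisticRegression().fit(vectorizer.transform(labeled_texts), labels)

# Margin = difference between the two highest class probabilities per instance.
proba = np.sort(clf.predict_proba(vectorizer.transform(pool_texts)), axis=1)
margins = proba[:, -1] - proba[:, -2]

# Smallest margins first: these are the instances worth requesting for annotation.
n_requested = 2
most_uncertain = np.argsort(margins)[:n_requested]
print([pool_texts[i] for i in most_uncertain])
```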
Annotate selected instances
Now you want to annotate the requested instances. To do that, proceed as follows:
Select Mode: LABEL
Select the default label:
SKIP - No label is pre-selected; you annotate each post yourself
Model prediction - The model’s predicted label is shown as the default
For each post, select the most suitable label, or choose SKIP to leave that post unannotated.
Click “Update” to retrain the model. This process takes a little while. You may see a message “Model not ready. Come back later”.
You can repeat this procedure multiple times.
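Each pass of request, label, and update is one round of an active-learning loop. The sketch below imitates that loop, with a scikit-learn pipeline standing in for the platform’s model; the texts, labels, and the get_human_label() placeholder are all hypothetical.

```python
# A minimal sketch of the request / label / update cycle.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def get_human_label(text):
    # Hypothetical stand-in: in the platform you pick the label (or SKIP) in the UI.
    return "positive" if "good" in text else "negative"

labeled = [("good value for money", "positive"), ("broken on arrival", "negative")]
pool = ["really good stuff", "awful, avoid it", "could be better", "so good"]

for _ in range(3):  # each pass mirrors one REQUEST, LABEL, and "Update"
    texts, labels = zip(*labeled)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

    # Pick the pool instance the model is least certain about (smallest margin).
    proba = np.sort(model.predict_proba(pool), axis=1)
    idx = int(np.argmin(proba[:, -1] - proba[:, -2]))

    text = pool.pop(idx)
    labeled.append((text, get_human_label(text)))  # annotate, then retrain next pass
```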
Option 2: Upload a file of annotated instances
Upload a file that contains a column with post annotations (a sketch of such a file follows these steps)
Select the text column
Select the label column (post annotations)
Click “Update” to retrain the model. This process takes a little while. You may see a message “Model not ready. Come back later”.
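As an illustration, an annotated file for this option could look like the following; the CSV format, the column names “text” and “label”, and the example rows are assumptions, so adapt them to whatever your file actually uses.

```python
# A minimal sketch of preparing an annotated file for upload.
# Column names and the file name are illustrative, not required by the platform.
import pandas as pd

annotated = pd.DataFrame({
    "text": [
        "The delivery was fast and the packaging was perfect.",
        "The device stopped working after two days.",
    ],
    "label": ["positive", "negative"],
})

annotated.to_csv("annotated_instances.csv", index=False)
```

When uploading, point the text-column selector at “text” and the label-column selector at “label” (or at whatever names your file uses).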
Estimated quality performance
After the model has been trained at least once, you can monitor how its quality evolves.
The platform provides a way to monitor model convergence, i.e., it answers the question of whether annotating more instances will further improve the model.
This is done by means of a regression model that estimates a normalized F1-score on unseen test data.
With your model selected, select Action: INSPECT.
After each iteration, a new point is added to the “estimated quality performance” line plot. When the curve starts to flatten, the model quality is converging and further annotation is unlikely to improve it much.
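If you export the plotted values, the flattening check can also be done numerically. A minimal sketch, where the per-iteration estimated F1 values and the tolerance are made-up illustrative numbers:

```python
# A minimal sketch of reading a plateau off the estimated-quality curve.
# The scores stand in for the per-iteration estimated (normalized) F1 values;
# the tolerance is an arbitrary illustrative threshold.
estimated_f1 = [0.52, 0.61, 0.68, 0.71, 0.72, 0.725]  # one value per retraining
tolerance = 0.01                                       # minimum gain worth another round

recent_gain = estimated_f1[-1] - estimated_f1[-2]
if recent_gain < tolerance:
    print("Curve is flattening: more annotation is unlikely to help much.")
else:
    print("Model is still improving: keep annotating.")
```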
Evaluate model performance
You can evaluate the model’s performance on a dataset different from the one it was trained on.
Prepare your data file with the following columns:
Text: Your raw text instances
Label: Label annotations for each text
Select Action: EVALUATE
Upload your file
Select the text column and label column
As a result, you’ll see a table with different performance metrics and a list of text instances where the predicted label and the annotated label do not match.
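For reference, the sketch below reproduces that kind of evaluation offline with scikit-learn; the training data, the pipeline, and the evaluation rows are illustrative stand-ins, not the platform’s actual model.

```python
# A minimal sketch of evaluating a text classifier on held-out annotated data.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Illustrative stand-in for the trained model.
train_texts = ["loved this phone", "battery died quickly", "works great", "very disappointing"]
train_labels = ["positive", "negative", "positive", "negative"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, train_labels)

# Evaluation data with a text column and a label column.
eval_df = pd.DataFrame({
    "text": ["great screen and battery", "stopped working after a week"],
    "label": ["positive", "negative"],
})
predictions = model.predict(eval_df["text"])

# Table of performance metrics (precision, recall, F1 per label).
print(classification_report(eval_df["label"], predictions, zero_division=0))

# Instances where the predicted label and the annotated label do not match.
mismatches = eval_df[eval_df["label"] != predictions]
print(mismatches)
```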