Now we’ll retrain the model. You can choose between annotating texts from your uploaded file or uploading annotated data as a new file.
Option 1: Annotate selected instances from the uploaded file
Select instances to annotate from the uploaded text
API Docs: https://api.symanto.net/active-learning/docs#/v2/request_instances_v2__model_id__instances_post
1. Select “Models” from the side navigation and select your model.
2. Select an Action from the drop-down and go to the “Update” tab.
3. Select Mode: “request” and select how many instances you want to annotate.
4. Select a sampler:
- Random: returns random text instances from your dataset.
- Margin (recommended): a metric that finds the instances the underlying model is most uncertain about.
5. Click the “Request” button.
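The “Request” step corresponds to the request-instances endpoint linked in the API docs above. Here is a minimal sketch in Python; note that the payload field names (`sampler`, `num_instances`) and the auth header are assumptions for illustration, not taken from the API reference:

```python
# Sketch: request instances to annotate from the active-learning API.
# The endpoint path comes from the API docs URL above; the payload
# shape and the auth header are ASSUMPTIONS, not confirmed names.
import json
from urllib.parse import urljoin

BASE_URL = "https://api.symanto.net/active-learning/"

def build_instances_request(model_id: str, sampler: str, num_instances: int):
    """Build (url, payload) for the request-instances call.

    sampler: "random" returns random texts; "margin" returns the
    instances the model is most uncertain about (recommended).
    """
    if sampler not in ("random", "margin"):
        raise ValueError("sampler must be 'random' or 'margin'")
    url = urljoin(BASE_URL, f"v2/{model_id}/instances")
    payload = {"sampler": sampler, "num_instances": num_instances}
    return url, payload

if __name__ == "__main__":
    url, payload = build_instances_request("my-model", "margin", 20)
    print(url)
    print(json.dumps(payload))
    # To actually send the request (needs an API key):
    # import requests
    # resp = requests.post(url, json=payload,
    #                      headers={"x-api-key": "YOUR_KEY"})
```

The network call itself is left commented out so the helper can be tested without credentials.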
...
Annotate selected instances
Now you want to annotate the requested instances. To do that, proceed as follows:
1. Select Mode: “Label”.
2. Select a label default:
- SKIP: lets you annotate each post yourself.
- Model prediction: shows the model’s prediction for each post.
3. For each given post, select the most suitable label, or choose SKIP to skip annotating that post.
4. Click “Update” to retrain the model. This process takes a little while. You may see the message “Model not ready. Come back later”.
You can repeat this procedure multiple times.
Option 2: Upload a file of annotated instances
API Docs: https://api.symanto.net/active-learning/docs#/v2/update_model_v2__model_id__post
Upload a file that has a column with post annotations
Select the text column
Select the label column (post annotations)
Click “Update” to retrain the model. This process takes a little while. You may see the message “Model not ready. Come back later”.
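Choosing the text and label columns amounts to extracting two columns from the uploaded file into annotation records. A minimal sketch, assuming a CSV file and assuming the update endpoint accepts a list of text/label records (the record field names are illustrative, not confirmed):

```python
# Sketch: turn an annotated CSV into records for the update-model call.
# The endpoint path comes from the API docs URL above; the record
# shape ({"text", "label"}) is an ASSUMPTION for illustration.
import csv

def load_annotations(path: str, text_col: str, label_col: str):
    """Read the chosen text/label columns from an annotated CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {"text": row[text_col], "label": row[label_col]}
            for row in csv.DictReader(f)
        ]

if __name__ == "__main__":
    records = load_annotations("annotated.csv", "Text", "Label")
    # To send the update (needs an API key and your model id):
    # import requests
    # requests.post(
    #     f"https://api.symanto.net/active-learning/v2/{model_id}",
    #     json=records, headers={"x-api-key": "YOUR_KEY"})
```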
...
Model convergence estimation
After the model has been trained at least once, you can monitor the evolution of its performance.
The platform provides a way to monitor model convergence; that is, it answers the question of whether annotating more instances will further improve the model.
...
With your model selected, go to the “Inspect” tab. After each iteration, a new point is drawn on the “estimated quality” line plot. When the curve starts to flatten, the model quality is converging, and the model is probably not going to learn much more.
If the curve is not stable, you have to continue training the model as it is still learning.
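The “curve starts to flatten” judgement can be approximated numerically. A hypothetical helper, not part of the platform, with arbitrary window and tolerance choices:

```python
# Sketch: decide whether the "estimated quality" curve has flattened.
# Window size and tolerance are arbitrary illustrative choices; the
# platform itself leaves this judgement to the user.
def has_converged(quality_points, window=3, tolerance=0.01):
    """Return True when the last `window` iteration-to-iteration
    changes in estimated quality are all below `tolerance`."""
    if len(quality_points) < window + 1:
        return False  # too few iterations to judge
    recent = quality_points[-(window + 1):]
    deltas = [abs(b - a) for a, b in zip(recent, recent[1:])]
    return all(d < tolerance for d in deltas)
```

For example, a run like `[0.55, 0.68, 0.74, 0.745, 0.747, 0.748]` would count as converged, while a still-climbing curve would not.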
...
Evaluate model performance
You can evaluate the model’s performance using a different dataset from the one the model was trained with.
Prepare your data file with the following columns:
Text: Your raw text instances
Label: Label annotations for each text
With your model selected, go to the “Evaluate” tab and upload your file.
Select the text column and label column
As a result, you’ll see a table with different performance metrics and, below it, a list of text instances (Errors) where the predicted label and the annotated label mismatch.
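The metrics table and the Errors list can be reproduced offline from a set of predictions. A minimal sketch showing only accuracy; the platform’s actual metric set is richer and may differ:

```python
# Sketch: evaluate predictions against annotated labels, mirroring
# the metrics table and "Errors" list described above. Only accuracy
# is computed here; the platform reports additional metrics.
def evaluate(texts, annotated, predicted):
    """Return overall accuracy and the mismatching instances."""
    errors = [
        {"text": t, "annotated": a, "predicted": p}
        for t, a, p in zip(texts, annotated, predicted)
        if a != p
    ]
    accuracy = 1 - len(errors) / len(texts)
    return {"accuracy": accuracy, "errors": errors}
```

Each entry in `errors` pairs the text with both labels, which is exactly the information needed to spot systematic annotation or model mistakes.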
Info: Use a different dataset for evaluation than the one you used for training.
...
...
Info: In case the results are not sufficient, there might be multiple reasons:
...