Measure data type


Represents the main metric for evaluating a model in the context of GeneXus Artificial Intelligence for custom models.


  • Score: Score, GeneXusAI
  • Additional (collection) -- Additional metrics
    • Key: VarChar(32)
    • Value: Numeric (10.5)
  • Local: Boolean


The Score field is the main metric for your model type (e.g. for a classification model it will be the F1-measure). The Additional field holds a set of additional measures that can help you decide whether your model meets your requirements. The following subsections describe some of these additional fields, and their semantics, when metrics are calculated locally by GeneXusAI.

Confusion Matrix key

The Confusion Matrix information is displayed in the Additional field as follows:

Key = ConfusionMatrix[{true-class},{predicted-class}]
Value = {value}

where {true-class} is the class (or label) defined in the test data and {predicted-class} is the class (or label) predicted by your trained model. The meaning of this output is that your model predicted {value} times that a {true-class} was a {predicted-class}.

For example, if the 'ConfusionMatrix[DOG,CAT]' key is associated with the value 3, it means that your model predicted 3 times that a DOG was a CAT.
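The Additional collection is a set of Key/Value pairs, so the confusion matrix arrives flattened into one entry per cell. As a minimal sketch (not the GeneXusAI API; the dict representation and the helper name are assumptions for illustration), the keys can be parsed back into a nested matrix like this:

```python
# Hypothetical sketch: the Additional collection represented as a plain
# dict of metric keys to numeric values.
additional = {
    "ConfusionMatrix[DOG,DOG]": 7,
    "ConfusionMatrix[DOG,CAT]": 3,  # the model predicted 3 times that a DOG was a CAT
    "ConfusionMatrix[CAT,CAT]": 9,
}

def parse_confusion_matrix(metrics):
    """Collect ConfusionMatrix[{true},{predicted}] entries into a nested dict."""
    prefix = "ConfusionMatrix["
    matrix = {}
    for key, value in metrics.items():
        if key.startswith(prefix) and key.endswith("]"):
            true_class, predicted_class = key[len(prefix):-1].split(",")
            matrix.setdefault(true_class, {})[predicted_class] = value
    return matrix

matrix = parse_confusion_matrix(additional)
print(matrix["DOG"]["CAT"])  # 3: DOG instances predicted as CAT
```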

Macro Measures key

The Macro Measures information is displayed in the Additional field as follows:

Key = {metric}@{threshold}
Value = {value}

where {metric} is one of Accuracy, Precision, Recall, or FScore, and {threshold} is a numeric value between 0 and 100. The {value} is the macro-measure (average value) over every per-category value of {metric} that exceeds the {threshold} value (otherwise, the category counts as 0).

For example, if you have three categories (DOG, CAT, PARROT), the value 0.897 associated with the 'F1Score@80' key means that 0.897 is the average of the F1-Scores for DOG, CAT, and PARROT, counting only those that exceed the 80% (0.80) threshold.
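The thresholding rule above can be sketched in a few lines. This is an illustrative reimplementation of the described semantics, not GeneXusAI code, and the sample F1 values are made up:

```python
def macro_measure(per_class_values, threshold):
    """Average per-class metric values; a value that does not exceed the
    threshold counts as 0 but still participates in the average."""
    kept = [v if v > threshold else 0.0 for v in per_class_values]
    return sum(kept) / len(kept)

# Illustrative F1-Scores per category (DOG, CAT, PARROT).
f1_scores = [0.95, 0.90, 0.70]

# PARROT (0.70) does not exceed the 0.80 threshold, so it counts as 0:
# (0.95 + 0.90 + 0.0) / 3
print(round(macro_measure(f1_scores, 0.80), 3))  # 0.617
```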


  • Examples of additional metrics.
  • When you use a cloud provider that does not return evaluation metrics for your model (e.g. IBM), the Local field is set to True and the Measure's results are calculated locally. In order to run a local evaluation, your model must be deployed (i.e. the Deploy procedure must be executed before calling this task), because internally the Predict procedure is called for each test-data item in your dataset (available in the Model.Dataset field).
  • The Main Score for an Image Classification problem will be the F1-Score (or the average over all thresholds).
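The local-evaluation loop described in the notes (predict each test-data item, then tally the results) can be sketched as follows. This is a conceptual Python sketch, not the GeneXusAI implementation; `predict` stands in for the Predict procedure and its signature is an assumption:

```python
def evaluate_locally(dataset, predict):
    """Sketch of a local evaluation: run the model on each test sample
    and accumulate ConfusionMatrix[{true},{predicted}] counters, mirroring
    the keys found in the Measure's Additional collection."""
    matrix = {}
    for sample, true_class in dataset:
        predicted = predict(sample)  # stand-in for the Predict procedure
        key = f"ConfusionMatrix[{true_class},{predicted}]"
        matrix[key] = matrix.get(key, 0) + 1
    return matrix

# Toy usage: a "model" that always predicts CAT.
dataset = [("img1", "DOG"), ("img2", "CAT"), ("img3", "DOG")]
print(evaluate_locally(dataset, lambda sample: "CAT"))
```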


AI Task Evaluate procedure
Platforms: Web (.NET, .NET Core, Java), Smart Devices (Android, iOS)


This data type is available as of GeneXus 16 upgrade 6.