
Configuring AMD Project

In this section, you will learn how to configure an Auto Model Development (AMD) project.

The process of configuring an AMD project involves the following steps:

Steps | Results
1. Create AMD Project | Create an AMD project from scratch.
2. Add Annotation Datasets | Feed the project with annotation files to form the training foundation.
3. Create and Validate a Dataset Version | Validate a dataset and link it to categories.
4. Create a New Experiment | Configure and train your model.

Creating AMD Project

In this section, you will learn how to create an AMD project.

To create an AMD project, do the following:

  1. Log in to the platform.

  2. Click the AI module and then click the Auto Model Development sub-module.

    The Auto Model Development (AMD) project page is displayed.

    Accessing AMD

  3. On the AMD Project page, click the + icon to create a new AMD project.

    Accessing AMD

    The New Auto Model Development Project dialog is displayed.

  4. In the New Auto Model Development Project dialog box, type the name of the AMD project, and then click the Create button.

    Accessing AMD

A new AMD project is successfully created.

Configuring Annotation Dataset

In this section, you will learn how to prepare training datasets:

  • Create New Dataset: Build a dataset inside the platform by selecting files and categories.

  • Import Dataset: Use existing annotation datasets uploaded previously.

Both methods produce the same result: a dataset you can version, validate, and use for experiments.

Creating New Annotation Dataset

In this section, you will learn how to create a new annotation dataset.

Use this method when you want control over which files and categories go into your dataset.

To create a new annotation dataset, do the following:

  1. Log in to the platform.

  2. Click the AI module and then click the Auto Model Development sub-module.

    The Auto Model Development (AMD) project page is displayed.

    Accessing AMD

  3. On the AMD Project page, click the + icon to create a new AMD project.

    Accessing AMD

    The New Auto Model Development Project dialog is displayed.

  4. In the New Auto Model Development Project dialog box, type the name of the AMD project, and then click the Create button.

    Accessing AMD

    A new AMD project is successfully created.

  5. Select the newly created AMD project and then click the Create button to start the process of creating a new annotation dataset.

    The New Annotation Dataset dialog box is displayed.

  6. In the New Annotation Dataset dialog box, type the name of the annotation dataset, and then click Next.

    The Select Annotation File dialog box is displayed.

  7. In the Select Annotation File dialog box, click the Filter icon to access various filters to find an appropriate annotation file or files.

    New Annotation Dataset
  8. In the Select Annotation File dialog box, use the following filters (an illustrative sketch of this filtering logic appears after this procedure):

    1. Type the name of the existing annotation file in the Search box.

    2. Select a Start Acquisition Date and End Acquisition Date.

    3. Click the Categories drop-down list to select one or more categories that represent the annotation or annotations.

    4. Click the Draw an Area of Interest button to draw an AOI on the map.


      New Annotation Dataset

      The platform shows annotation files in the Properties Filter section that match the selected filters.

  9. In the Select Annotation File dialog box, select the appropriate annotation files and then click Next.

    New Annotation Dataset

    The Select Annotation Category(s) dialog box is displayed.

  10. In the Select Annotation Category(s) dialog box, select the category and then click Create Dataset.

    New Annotation Dataset

    A new annotation dataset is successfully created as shown below:

    New Annotation Dataset
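
The filters in step 8 simply narrow the list of annotation files by name, acquisition date range, category, and area of interest (AOI). The platform applies this filtering for you; the short Python sketch below only illustrates the idea, using hypothetical metadata fields (name, acquired, categories, bounds) and the open-source shapely library for the AOI check. It is not the platform's internal logic.

    from datetime import date
    from shapely.geometry import box

    # Hypothetical annotation-file metadata; field names are illustrative,
    # not the platform's actual schema.
    annotation_files = [
        {"name": "uae_coast_01", "acquired": date(2024, 3, 5),
         "categories": {"ship"}, "bounds": (55.0, 24.5, 55.4, 24.9)},
        {"name": "gulf_port_02", "acquired": date(2023, 11, 20),
         "categories": {"vehicle"}, "bounds": (51.3, 25.2, 51.7, 25.6)},
    ]

    start, end = date(2024, 1, 1), date(2024, 12, 31)   # acquisition date range
    wanted = {"ship"}                                    # selected categories
    aoi = box(54.8, 24.0, 55.6, 25.0)                    # area of interest drawn on the map

    matches = [
        f for f in annotation_files
        if start <= f["acquired"] <= end
        and f["categories"] & wanted
        and aoi.intersects(box(*f["bounds"]))
    ]
    print([f["name"] for f in matches])   # -> ['uae_coast_01']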

Importing Existing Annotation Dataset

In this section, you will learn how to import an existing dataset.

Use this method when your team has already prepared a dataset. Instead of recreating it, you can simply import the prepared dataset into AMD.

To import an existing annotation dataset, do the following:

  1. Select the Annotation Dataset tab and then click the Import Dataset button.

    The Import Annotation Dataset dialog box is displayed.

  2. In the Import Annotation Dataset dialog box, type the name of the imported annotation dataset in the Annotation Dataset Name field, and then select an existing annotation dataset from the Select Annotation Dataset drop-down list.

    New Annotation Dataset

    The platform successfully imports the existing annotation dataset.

    New Annotation Dataset

Creating New Version

In this section, you will learn how to validate a dataset and link it to categories.

Prerequisites:

Before validating a dataset, ensure that you have created at least two annotation datasets, as shown below:

New Annotation Dataset

To create a new version, do the following:

  1. Select the Versions tab and then click the Create New Version button.

    New Annotation Dataset

    The Create New Version dialog box is displayed.

  2. In the Create New Version dialog box, type or select the following, and then click the Create and Validate button:

    Field | Example Value | Description
    Version Name | s300_uae_detections | A unique identifier for this version of the experiment or model training run.
    Select Annotation Dataset | s300_gulf | The annotation dataset linked to this version for training and validation.
    Comment (Optional) | this is version number 1 | A short, descriptive note to provide additional context for the version.
    Select Model | yolov8 | The model architecture chosen for training (e.g., YOLOv5, YOLOv7, YOLOv8).
    Resolution | -1 | Defines the input resolution. -1 usually means the default or auto-selection.
    Tile Size | 512 | Size (in pixels) for image tiling during training or inference (see the tiling sketch after this procedure).

    New Annotation Dataset

    The platform starts validating the dataset and completes the process within a few minutes.

    New Annotation Dataset
  3. Click the validated dataset to view details.

    New Annotation Dataset
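
The Tile Size value (512 in the example above) controls how large scenes are split into fixed-size windows before training or inference. The platform performs this tiling internally; the minimal NumPy sketch below is only meant to illustrate what 512-pixel tiling means, not how the platform implements it.

    import numpy as np

    def tile_image(image: np.ndarray, tile_size: int = 512):
        """Yield (row, col, tile) windows of tile_size x tile_size pixels.
        Edge tiles may be smaller; real pipelines typically pad or overlap them."""
        height, width = image.shape[:2]
        for top in range(0, height, tile_size):
            for left in range(0, width, tile_size):
                yield top, left, image[top:top + tile_size, left:left + tile_size]

    # Example: a dummy 1280 x 1920 single-band scene split into 512-pixel tiles
    scene = np.zeros((1280, 1920), dtype=np.uint8)
    tiles = list(tile_image(scene))
    print(len(tiles))   # 3 rows x 4 columns = 12 tiles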

Configuring Experiments

In this section, you will learn how to configure and train your model.

To configure experiments, do the following:

  1. Select the Experiments tab and then click the Create New Version button.

    New Annotation Dataset

    The Configure Experiment dialog box is displayed.

  2. In the Configure Experiment dialog box, type or select the following, and then click the Start Training button:

    Field | Example Value | Description
    Experiment Name | S300_Experiment | A unique name for the experiment. Helps identify and organize multiple runs.
    Dataset Version | s300_uae_detections | The dataset version linked to this experiment for training and validation.
    Architecture Model | yolov8 | The deep learning architecture chosen for training (e.g., YOLOv5, YOLOv7, YOLOv8).
    No. of Epochs | 32 | Defines the number of training cycles. More epochs allow better learning but risk overfitting.
    Learning Rate | 0.001 | Controls how quickly the model updates weights. Lower = more stable, higher = faster but riskier. Recommended: 0.001–0.01.
    Optimizer | Adam or SGD | Users can select Adam or SGD as the optimizer. Adam is recommended in most cases because it adapts the learning rate automatically and converges faster. SGD can be useful for large datasets or when reproducing research benchmarks, offering finer control and sometimes better generalization, but it typically requires careful tuning of learning rate and momentum.

    Once you click the Start Training button, the platform starts training the model on the selected dataset version. For an illustrative view of how these settings map to code, see the sketch after this procedure.

    New Annotation Dataset
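
The AMD platform runs the training itself once you click Start Training. For readers who want a concrete sense of what the fields above correspond to, the sketch below shows roughly equivalent settings using the open-source Ultralytics YOLOv8 API. The weights file and dataset YAML names are placeholders, and this is not the platform's own API.

    from ultralytics import YOLO

    # Placeholder weights and dataset config; substitute your own files.
    model = YOLO("yolov8n.pt")
    model.train(
        data="s300_uae_detections.yaml",  # dataset definition (hypothetical name)
        epochs=32,                        # No. of Epochs
        imgsz=512,                        # matches the 512-pixel tile size
        lr0=0.001,                        # Learning Rate
        optimizer="Adam",                 # Optimizer (Adam or SGD)
    )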

Best Practices for Configuring an Experiment

These best practices highlight how to name, structure, and fine-tune your experiments so you can track progress easily and achieve consistent outcomes.

Experiment Naming
  • Use descriptive names that combine project, dataset, and purpose.
  • Avoid generic names like test1.
  • Include version numbers or dates for reproducibility.
  • Example: YOLOv8_UAE_v1.
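
As a rough illustration of this naming convention, a small helper like the following (purely hypothetical, not part of the platform) can generate consistent, dated experiment names:

    from datetime import date

    def experiment_name(architecture: str, region: str, version: int) -> str:
        """Build a descriptive, versioned name such as 'YOLOv8_UAE_v1_20250101'."""
        return f"{architecture}_{region}_v{version}_{date.today():%Y%m%d}"

    print(experiment_name("YOLOv8", "UAE", 1))
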
Dataset Version
  • Always validate the dataset before linking it.
  • Ensure annotations are consistent and aligned with categories.
  • Document dataset version in experiment reports.
Architecture Model
  • Use the latest stable model (e.g., YOLOv8) for higher accuracy.
  • For lightweight runs or quick iterations, use YOLOv5.
  • Keep architecture consistent when comparing experiments.
Epochs
  • Default range: 20–50 epochs for most projects.
  • Small datasets: 10–20 epochs (avoid overfitting).
  • Large datasets: 50+ epochs, but monitor validation accuracy.
  • Rule of thumb: more data allows more epochs.
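
Monitoring validation accuracy, as recommended above, is typically automated with a simple patience rule: stop when the metric has not improved for several consecutive epochs. Whether the platform exposes this is not covered here; the sketch below only illustrates the idea.

    class EarlyStopping:
        """Signal a stop when the validation metric has not improved for `patience` epochs."""

        def __init__(self, patience: int = 5, min_delta: float = 0.0):
            self.patience = patience
            self.min_delta = min_delta
            self.best = None
            self.bad_epochs = 0

        def should_stop(self, val_metric: float) -> bool:
            if self.best is None or val_metric > self.best + self.min_delta:
                self.best = val_metric      # improvement: reset the counter
                self.bad_epochs = 0
            else:
                self.bad_epochs += 1        # no improvement this epoch
            return self.bad_epochs >= self.patience

    stopper = EarlyStopping(patience=5)
    for epoch, val_map in enumerate([0.41, 0.48, 0.50, 0.50, 0.49, 0.50, 0.49, 0.50]):
        if stopper.should_stop(val_map):
            print(f"stopping at epoch {epoch}")
            break
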
Learning Rate
  • Recommended starting point: 0.001.
  • Too high (≥0.01) → unstable training.
  • Too low (≤0.0001) → very slow convergence.
  • Adjust gradually; use schedulers if available.
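
If your training environment exposes a scheduler, decaying the learning rate over time is a common way to combine early progress with late-stage stability. The snippet below is a generic PyTorch sketch with a placeholder model, not the platform's API.

    import torch

    model = torch.nn.Linear(10, 2)                              # placeholder model for illustration
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

    # Multiply the learning rate by 0.1 every 10 epochs
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

    for epoch in range(32):
        # ... one epoch of training would run here ...
        optimizer.step()                                        # update weights
        scheduler.step()                                        # then decay the learning rate on schedule
        if epoch % 10 == 0:
            print(epoch, scheduler.get_last_lr())
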
Optimizer
  • Adam is recommended for most cases (balanced speed and accuracy).
  • SGD is useful when replicating research benchmarks or needing strict gradient control.
  • Keep the optimizer consistent for fair comparisons.
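
In plain PyTorch terms (a generic sketch with a placeholder model, not the platform's internals), the two choices look like this; note that SGD usually needs a higher starting learning rate and a momentum value:

    import torch

    model = torch.nn.Linear(10, 2)   # placeholder model for illustration

    # Adam: adapts per-parameter learning rates; a safe default for most runs
    adam = torch.optim.Adam(model.parameters(), lr=0.001)

    # SGD with momentum: finer control and sometimes better generalization,
    # but typically needs more careful tuning
    sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
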
General Tips
  • Run a baseline experiment first with default hyperparameters.
  • Track experiments carefully (names, comments, or tools).
  • Save checkpoints regularly, especially for long runs.
  • Validate pipeline stability on small runs before scaling up.
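
Saving checkpoints regularly, as suggested above, means persisting both the model weights and the optimizer state so a long run can resume where it left off. The following is a generic PyTorch sketch for illustration only; the platform handles its own checkpointing.

    import torch

    model = torch.nn.Linear(10, 2)                              # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    # Save everything needed to resume: epoch counter, model weights, optimizer state
    torch.save({
        "epoch": 10,
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
    }, "experiment_checkpoint.pt")

    # Resume later from the saved state
    checkpoint = torch.load("experiment_checkpoint.pt")
    model.load_state_dict(checkpoint["model_state"])
    optimizer.load_state_dict(checkpoint["optimizer_state"])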