# Project File Format
The .oneai file is a JSON document that stores all configuration for a ONE AI project. It is created automatically when you set up a new project in ONE WARE Studio and updated as you modify settings through the UI.
## File Structure

```json
{
  "type": "imageDetection",
  "guid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "version": "1.2.3.4",
  "data": { ... }
}
```
| Field | Type | Description |
|---|---|---|
| type | string | Project type identifier. Currently "imageDetection" |
| guid | string | Unique project identifier (UUID) |
| version | string | ONE AI version that last saved the file |
| data | object | All project configuration (see below) |
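Since the file is plain JSON, it can be inspected with standard tooling. Below is a minimal sketch of reading the top-level fields; the inline JSON string stands in for an actual `.oneai` file on disk, and the GUID value is made up for illustration.

```python
import json
import uuid

# Hypothetical .oneai content; in practice you would read this from a file.
raw = """
{
  "type": "imageDetection",
  "guid": "2f1c9d3e-4a5b-4c7d-8e9f-0a1b2c3d4e5f",
  "version": "1.2.3.4",
  "data": {}
}
"""

project = json.loads(raw)

# Basic sanity checks on the top-level fields.
assert project["type"] == "imageDetection"
uuid.UUID(project["guid"])   # raises ValueError if the GUID is malformed
print(project["version"])    # ONE AI version that last saved the file
```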
## Data Object

The `data` object contains the full project state:
### Core Settings
| Key | Type | Description |
|---|---|---|
| annotationMode | string | "classes", "objects", or "segmentation" |
| capabilityMode | string | "basic" or "advanced" |
| fusionType | string | Image fusion mode: "single", "multi", "difference", "comparison", "stereo" |
| multiImageMode | string | Selected multi-image processing mode |
| advancedMultiImageSettings | object | Configuration for multi-image fusion |
### Labels and Groups

```json
"labels": [
  {
    "name": "Defect",
    "id": 0,
    "groupId": 0,
    "color": 4278190335,
    "excludeFromTraining": false
  }
],
"groups": [
  { "name": "Default", "id": 0 }
]
```
| Field | Type | Description |
|---|---|---|
| name | string | Display name |
| id | int | Unique numeric identifier |
| groupId | int | Parent group ID (labels only) |
| color | uint | ARGB color as unsigned 32-bit integer |
| excludeFromTraining | bool | Whether to exclude this label from training |
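The packed ARGB value can be split into its four 8-bit channels with bit shifts. This sketch decodes the example color `4278190335` from the snippet above:

```python
# Decode a color stored as an unsigned 32-bit ARGB integer into its
# alpha, red, green, and blue channels (each 0-255).
def argb_channels(color: int) -> tuple[int, int, int, int]:
    a = (color >> 24) & 0xFF
    r = (color >> 16) & 0xFF
    g = (color >> 8) & 0xFF
    b = color & 0xFF
    return a, r, g, b

# 4278190335 == 0xFF0000FF: fully opaque blue.
print(argb_channels(4278190335))  # (255, 0, 0, 255)
```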
### Hardware Settings
Defines the target deployment hardware. ONE AI optimizes model architecture for these constraints.
| Key | Type | Default | Description |
|---|---|---|---|
| hardwareType | string | "FPGA" | Target hardware: FPGA, FPGA SoC, CPU, GPU, TPU, MCU, Server, ASIC |
| prioritizeSpeedOptimization | bool | false | Prioritize inference speed over accuracy |
| computeCapability | number | 1 | Available compute capacity |
| computeCapabilityUnit | string | "TOPS" | Unit: TOPS, MOPS, KOPS |
| dspBlocks | number | 40 | Number of 8-bit multipliers (FPGA DSP blocks) |
| dspGroups | number | 1 | 8-bit multipliers with sum per DSP block |
| prioritizeMemoryOptimization | bool | false | Prioritize memory efficiency |
| memoryLimit | number | 1 | Available memory |
| memoryLimitUnit | string | "GB" | Unit: KB, MB, GB |
| optimizeForParallelExecution | bool | false | Enable parallel execution optimization |
| quantizedCalculations | bool | false | Use quantized arithmetic |
| bitsPerValue | number | 8 | Bit width for quantized values (2–32) |
| fpgaClockSpeed | number | 50 | FPGA clock speed in MHz |
| maximumMemoryUsage | number | 25 | Maximum memory usage (0–100%) |
| maximumDspUsage | number | 25 | Maximum multiplier usage (0–100%) |
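As a small illustration of how `memoryLimit` and `memoryLimitUnit` combine, the sketch below converts them into a byte count. It assumes binary (1024-based) units; whether ONE AI interprets KB/MB/GB as powers of 1024 or 1000 is not specified here.

```python
# Assumed unit sizes (powers of 1024); this is an illustration, not a
# documented ONE AI convention.
UNIT_BYTES = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}

memory_limit = 1      # memoryLimit (default)
unit = "GB"           # memoryLimitUnit (default)

limit_bytes = memory_limit * UNIT_BYTES[unit]
print(limit_bytes)  # 1073741824
```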
### Prefilters

Prefilters are stored as arrays corresponding to pipeline stages:

```json
"preFiltersBegin": [...],
"preFiltersBeforeAugmentation": [...],
"preFiltersAfterAugmentation": [...],
"preFiltersEnd": [...]
```
Each filter entry:
```json
{
  "id": "initialResize",
  "isEnabled": true,
  "settings": { ... }
}
```
| Field | Type | Description |
|---|---|---|
| id | string | Filter type identifier |
| isEnabled | bool | Whether the filter is active |
| settings | object | Filter-specific parameters |
See Prefilters for available filter types and their parameters.
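Because each stage is an array of entries with the same shape, collecting the enabled filters per stage is straightforward. A sketch, using a hypothetical `data` object (only `initialResize` is a filter id documented here; `exampleFilter` is made up):

```python
# Hypothetical excerpt of the "data" object from a .oneai file.
data = {
    "preFiltersBegin": [
        {"id": "initialResize", "isEnabled": True, "settings": {}},
    ],
    "preFiltersBeforeAugmentation": [],
    "preFiltersAfterAugmentation": [
        {"id": "exampleFilter", "isEnabled": False, "settings": {}},
    ],
    "preFiltersEnd": [],
}

# The four pipeline stages, in processing order.
stages = [
    "preFiltersBegin",
    "preFiltersBeforeAugmentation",
    "preFiltersAfterAugmentation",
    "preFiltersEnd",
]

# Map each stage to the ids of its enabled filters.
enabled_by_stage = {
    stage: [f["id"] for f in data.get(stage, []) if f["isEnabled"]]
    for stage in stages
}

for stage, ids in enabled_by_stage.items():
    print(stage, ids)
```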
### Augmentations

Augmentations are organized into three arrays:

```json
"augmentationsBegin": [...],
"augmentationsStatic": [...],
"augmentationsDynamic": [...]
```
Each augmentation entry uses the same schema as prefilters:
```json
{
  "id": "move",
  "isEnabled": true,
  "settings": { ... }
}
```
Default static augmentations: mosaic, move, rotate, flip, resize.
Default dynamic augmentation: color.
See Augmentations for available augmentation types and their parameters.
### Model Output Settings

Controls what the model predicts. Available keys depend on `annotationMode`:
| Key | Modes | Type | Default | Description |
|---|---|---|---|---|
| classificationType | classes | string | "allIndividualClasses" | allIndividualClasses, oneClassPerImage, atLeastOneClass, regression |
| predictionType | objects | string | "sizePositionClass" | sizePositionClass, positionClass, allPresentClasses, largestArea, mostObjects, atLeastOneObject |
| segmentationType | segmentation | string | "oneClassPerPixel" | oneClassPerPixel, sizePositionClass, positionClass, allPresentClasses, largestArea, mostObjects, atLeastOneObject |
| sizePredictionEffort | objects | number | 25 | Effort allocated to size prediction (0–100%) |
| positionPredictionResolution | objects | number | — | Resolution for coordinate prediction |
| precisionRecallPrioritization | all | number | — | Balance between precision and recall |
### Model Input Settings (Advanced)
Fine-grained control over model architecture. These are automatically derived from Basic Mode settings unless overridden.
| Key | Type | Default | Description |
|---|---|---|---|
| surroundingSizeMode | string | "relativeToObject" | Context sizing: relativeToObject or relativeToImage |
| minRelativeSurroundingSize | number | 100 | Minimum surrounding context (%) |
| maxRelativeSurroundingSize | number | 100 | Maximum surrounding context (%) |
| estimatedMinWidth | number | 10 | Estimated minimum object width (% of image) |
| estimatedMinHeight | number | 10 | Estimated minimum object height (% of image) |
| estimatedAvgWidth | number | 50 | Estimated average object width (%) |
| estimatedAvgHeight | number | 50 | Estimated average object height (%) |
| estimatedMaxWidth | number | 90 | Estimated maximum object width (%) |
| estimatedMaxHeight | number | 90 | Estimated maximum object height (%) |
| detectComplexity | number | 50 | Feature detection complexity (0–100%) |
| sameClassDifference | number | 50 | Intra-class variance (0–100%) |
| backgroundDifference | number | 50 | Object-to-background contrast (0–100%) |
| maxFeatures | number | 10 | Maximum features for classification |
| avgFeatures | number | 2 | Average features for classification |
### Model Input Settings (Basic Mode)
Simplified settings that auto-configure advanced model input parameters:
| Key | Type | Default | Description |
|---|---|---|---|
| estimatedSize | string | "Small,Medium" | Object size categories: Tiny, Small, Medium, Big (comma-separated) |
| numberOfFeatures | string | "FewFeatures" | AlwaysOne, FewFeatures, ManyFeatures |
| typeOfEnvironment | string | "Controlled" | Controlled, Limited, Natural |
| typeOfFeatures | string | "Limited" | Similar, Limited, Open |
### Validation and Test Settings
| Key | Type | Default | Description |
|---|---|---|---|
| useValidationSplit | bool | true | Enable automatic validation split |
| validationSplit | number | 20 | Percentage of training data used for validation (0–100) |
| testImagePercentage | number | 0 | Percentage of training images used for testing |
| validationImagePercentage | number | 100 | Percentage of validation images used for testing |
| validationImageSplitPercentage | number | 0 | Additional validation/test split |
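To make the split arithmetic concrete, the sketch below applies the default `validationSplit` of 20 to a hypothetical dataset of 1000 training images (assuming the percentage is taken from the training set, as the table describes):

```python
# Illustrative numbers; the dataset size is hypothetical.
total_training_images = 1000
validation_split = 20  # validationSplit, in percent

n_validation = total_training_images * validation_split // 100
n_training = total_training_images - n_validation

print(n_training, n_validation)  # 800 200
```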
### Auto-Label Settings
| Key | Type | Description |
|---|---|---|
| selectedAutoLabelModels | array | Models used for auto-labeling: [{ "name": "...", "minConfidence": 0.5 }] |
| autoLabelMergeThreshold | number | Overlap threshold for merging auto-label predictions |
| selectedRunModel | string | Name of the selected inference model |
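A short sketch of how a `minConfidence` threshold from `selectedAutoLabelModels` might be applied to predictions. The model name and the prediction records are hypothetical; only the `name`/`minConfidence` structure comes from the table above.

```python
# Hypothetical entry from selectedAutoLabelModels.
model = {"name": "example-model", "minConfidence": 0.5}

# Hypothetical predictions produced by that model.
predictions = [
    {"label": "Defect", "confidence": 0.82},
    {"label": "Defect", "confidence": 0.31},
]

# Keep only predictions at or above the model's confidence threshold.
kept = [p for p in predictions if p["confidence"] >= model["minConfidence"]]
print(len(kept))  # 1
```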
## Project Directory Structure

A .oneai project file sits alongside its associated data:

```
MyProject/
├── MyProject.oneai   ← project configuration
├── Dataset/          ← training images and annotations
├── Models/           ← trained model files
└── Export/           ← exported models (ONNX, VHDL, TFLite)
```

## Need Help? We're Here for You!
Christopher from our development team is ready to help with any questions about ONE AI usage, troubleshooting, or optimization. Don't hesitate to reach out!