Integrate ONE AI Models (ONNX / TensorFlow Lite)
This guide explains how to use the AI models generated by ONE AI after export. ONE AI can export models in ONNX and TensorFlow Lite formats, which can be integrated into virtually any application or platform.
Export Settings
When exporting your model, make sure to enable "Include Pre- and Postprocessing" in the export options. This simplifies integration by embedding all necessary preprocessing (normalization, resizing) and postprocessing (result interpretation) directly into the model.

Input Format
Single Image Input
When your model uses a single image as input, the input tensor has the following shape:
[1, 3, height, width]
| Dimension | Description |
|---|---|
| 1 | Batch size (always 1 for inference) |
| 3 | RGB color channels |
| height | Image height in pixels (as configured during training) |
| width | Image width in pixels (as configured during training) |
Data Type: float32 with values in the range [0, 255] (when preprocessing is included)
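The following is a minimal sketch of how such an input tensor can be prepared in Python with Pillow and NumPy; the file path, the size constants, and the helper name image_to_tensor are illustrative and not part of the exported model or SDK.

```python
# Sketch: build a [1, 3, height, width] float32 tensor from an image file.
# MODEL_WIDTH / MODEL_HEIGHT stand in for the input size configured during training.
import numpy as np
from PIL import Image

MODEL_WIDTH, MODEL_HEIGHT = 224, 224  # replace with your model's input size

def image_to_tensor(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB")
    img = img.resize((MODEL_WIDTH, MODEL_HEIGHT))
    arr = np.asarray(img, dtype=np.float32)  # (height, width, 3), values 0-255
    arr = np.transpose(arr, (2, 0, 1))       # -> (3, height, width)
    return np.expand_dims(arr, axis=0)       # -> (1, 3, height, width)
```

Because preprocessing is embedded in the exported model, the pixel values are passed as-is in the [0, 255] range; no extra normalization is needed here.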
Multiple Image Input
For models that compare multiple images (e.g., difference detection), the input tensor shape is:
[1, 3, height, width, image_count]
| Dimension | Description |
|---|---|
| 1 | Batch size (always 1 for inference) |
| 3 | RGB color channels |
| height | Image height in pixels |
| width | Image width in pixels |
| image_count | Number of input images |
Image Order
When working with multiple input images, the dataset typically groups images by a base name with different suffixes (e.g., img1_rgb.png and img1_depth.png). The model therefore expects these images to be stacked in a specific order along the last dimension of the input tensor (image_count).
The input index order is determined by the suffix names (a sketch of this ordering follows the list):
- Alphabetical Sorting: By default, suffixes are sorted alphabetically.
  - Example: For _rgb and _depth, _depth is assigned Index 0 and _rgb Index 1.
- Reference Priority: If a suffix suggests a reference image (suffix is "good", "temp", "ref", or "comp", case-insensitive), it is forced to Index 0, overriding alphabetical order.
  - Note: If multiple images contain these keywords, those specific images are sorted alphabetically amongst themselves.
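The ordering rules above can be expressed in a few lines of Python. This is an illustrative sketch of the described behavior (keyword matching by substring, as in the Note above), not code taken from ONE AI.

```python
# Sketch of the suffix ordering described above: reference-like suffixes come
# first (sorted alphabetically among themselves), the rest follow alphabetically.
REFERENCE_KEYWORDS = ("good", "temp", "ref", "comp")

def order_suffixes(suffixes):
    def is_reference(suffix: str) -> bool:
        return any(keyword in suffix.lower() for keyword in REFERENCE_KEYWORDS)

    references = sorted(s for s in suffixes if is_reference(s))
    others = sorted(s for s in suffixes if not is_reference(s))
    return references + others

print(order_suffixes(["_rgb", "_depth"]))  # ['_depth', '_rgb'] (alphabetical)
print(order_suffixes(["_rgb", "_ref"]))    # ['_ref', '_rgb']   (reference forced to Index 0)
```

Images prepared as [1, 3, height, width] arrays in this order can then be stacked along a trailing axis (for example with NumPy's np.stack(tensors, axis=-1)) to obtain the [1, 3, height, width, image_count] input tensor.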
Output Format
The output format depends on the task type configured during training.
Classification
For image classification tasks, the output tensor has the shape:
[1, detected_classes, 2]
| Index | Description |
|---|---|
| 0 | Confidence value (0.0 - 1.0) |
| 1 | Class ID |
Example Output:
[[0.95, 0], [0.03, 1], [0.02, 2]]
This means: Class 0 with 95% confidence, Class 1 with 3% confidence, Class 2 with 2% confidence.
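A small sketch of how this output can be read in Python with NumPy; the hard-coded output array mirrors the example above and would normally come from the runtime.

```python
# Sketch: pick the top prediction from a [1, detected_classes, 2] output
# (index 0 = confidence, index 1 = class ID).
import numpy as np

output = np.array([[[0.95, 0], [0.03, 1], [0.02, 2]]], dtype=np.float32)

predictions = output[0]                           # drop the batch dimension
best = predictions[np.argmax(predictions[:, 0])]  # row with the highest confidence
confidence, class_id = float(best[0]), int(best[1])
print(f"Class {class_id} with {confidence:.0%} confidence")  # Class 0 with 95% confidence
```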
Object Detection
For object detection tasks, the output tensor has the shape:
[1, detected_objects, 6]
| Index | Description |
|---|---|
| 0 | X center (in pixels, relative to model input size) |
| 1 | Y center (in pixels, relative to model input size) |
| 2 | Width (in pixels) |
| 3 | Height (in pixels) |
| 4 | Confidence value (0.0 - 1.0) |
| 5 | Class ID |
Example Output:
[[128, 96, 64, 48, 0.92, 1]]
This means: Object of Class 1 at center position (128px, 96px) with size (64px × 48px) and 92% confidence.
The pixel coordinates in the output refer to the model's input dimensions, not your original image size. If you resize or crop your image before feeding it into the model, you need to transform the coordinates back to your original image space.
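The sketch below parses such an output and scales the coordinates back to the original image, assuming the image was simply resized to the model input (no cropping or padding); the sizes and the 0.5 confidence threshold are illustrative.

```python
# Sketch: parse a [1, detected_objects, 6] output and map coordinates from
# model-input space back to the original image size.
import numpy as np

MODEL_WIDTH, MODEL_HEIGHT = 256, 192   # model input size (illustrative)
ORIG_WIDTH, ORIG_HEIGHT = 1920, 1080   # original image size (illustrative)

output = np.array([[[128, 96, 64, 48, 0.92, 1]]], dtype=np.float32)

scale_x = ORIG_WIDTH / MODEL_WIDTH
scale_y = ORIG_HEIGHT / MODEL_HEIGHT

for x_center, y_center, width, height, confidence, class_id in output[0]:
    if confidence < 0.5:               # skip low-confidence detections
        continue
    print(
        f"class {int(class_id)} ({confidence:.0%}): "
        f"center=({x_center * scale_x:.0f}, {y_center * scale_y:.0f}), "
        f"size=({width * scale_x:.0f} x {height * scale_y:.0f})"
    )
```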
Segmentation
For semantic segmentation tasks, the output tensor has the shape:
[1, 1, height, width]
Each pixel position contains the predicted Class ID for that location.
Example: A 256×256 segmentation output would be a tensor of shape [1, 1, 256, 256], where each value represents the class at that pixel.
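As a quick sketch of working with this output in Python, the class map can be pulled out of the tensor and inspected per class; the zero-filled placeholder stands in for a real model output.

```python
# Sketch: inspect a [1, 1, height, width] segmentation output.
import numpy as np

output = np.zeros((1, 1, 256, 256), dtype=np.int64)      # placeholder model output

class_map = output[0, 0]                                  # (height, width) of class IDs
class_ids, counts = np.unique(class_map, return_counts=True)
print(dict(zip(class_ids.tolist(), counts.tolist())))     # pixel count per class

mask_class_1 = class_map == 1                             # boolean mask for class ID 1
```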
Integration Options
C# SDK (Recommended for .NET Applications)
We provide a ready-to-use C# SDK that handles all the complexity of model loading, inference, and result parsing.
C++ Project or Executable
For deployment on embedded systems, servers, or when you need maximum performance, ONE AI can export a complete C++ project or a precompiled executable based on TensorFlow Lite.
Direct Integration
For custom integrations, you can use the standard ONNX or TensorFlow Lite runtimes:
ONNX Runtime
- Python: ONNX Runtime Python Documentation
- C++: ONNX Runtime C++ API
- C#: ONNX Runtime C# API
- JavaScript: ONNX Runtime Web
- Others: ONNX Runtime
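A minimal end-to-end sketch using the ONNX Runtime Python package; "model.onnx" and the dummy 224×224 input are placeholders, and in practice the tensor is prepared as described in the Input Format section.

```python
# Sketch: run an exported ONNX model with ONNX Runtime (Python).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
model_input = session.get_inputs()[0]
print(model_input.name, model_input.shape)   # e.g. [1, 3, height, width]

# Dummy tensor for illustration; use a real preprocessed image in practice.
tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {model_input.name: tensor})
print(outputs[0].shape)                      # task-dependent output, see Output Format
```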
TensorFlow Lite
- Python: TFLite Python API
- C++: LiteRT for Microcontrollers
- JavaScript: LiteRT Web
- Android: LiteRT Android Guide
- iOS: LiteRT iOS Quickstart
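An equivalent sketch with the TensorFlow Lite interpreter from the TensorFlow Python package; "model.tflite" is a placeholder, and the zero-filled tensor stands in for a preprocessed image.

```python
# Sketch: run an exported TensorFlow Lite model with the TFLite interpreter.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]["shape"])             # e.g. [1, 3, height, width]

# Dummy tensor matching the reported shape and dtype; replace with a real image.
tensor = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], tensor)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)                          # task-dependent output, see Output Format
```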