This example project demonstrates how to receive data from an HTTP endpoint, perform some normalization, and then publish the normalized data to an InfluxDB2 database.
It also includes visualization/dashboard examples using Grafana (which queries InfluxDB2).
Once the project is set up, an initial sync must be performed to deploy everything.
Essentially, the cloud state must be synced up to the current state of the new repository, which now contains a cloned version of the template.
Syncing is always a manually initiated operation, available whenever updates to the code (underlying repository) happen.
Upon syncing, there will be a prompt to set up some project-level secrets (passwords); simply choose a secure password for each.
WARNING: These secrets act as an authentication layer, since some services are openly accessible to the entire internet; as such, DO NOT PICK WEAK PASSWORDS.
Note that once set, the values cannot be viewed again. This mostly matters for services like Grafana, where users must enter them directly to access the UI, so make sure to save each value somewhere safe.
Other services reference these secrets directly in their project deployment configurations, so they do not need to be entered manually.
Upon first sync, it is normal for some applications to restart or error a few times while their dependencies are still starting up.
Applications should not need to restart more than 3-5 times before everything is up and running.
This project organizes deployments into service groups in the quix.yaml file.
Service groups allow you to logically categorize related services and manage them together.
| Group | Services | Purpose |
|---|---|---|
| (ungrouped) | Grafana, HTTP API Source, HTTP Data Normalization, InfluxDB2, InfluxDB2 Sink | Core pipeline services |
| Example source | OPC UA Server, OPC UA Source, HTTP Sink | Mock data generation for testing |
In quix.yaml, a service is assigned to a group using the group property:
```yaml
deployments:
  - name: OPC UA Server
    group: Example source   # <-- assigns this service to the "Example source" group
    application: opc-ua-server
    # ...
```

Services without a `group` property are part of the main (ungrouped) pipeline.
This is the HTTP-based data ingestion and processing portion of the project.
These applications are only meant to simulate an external data source. They are grouped
together under Example source in quix.yaml since they work as a unit:
- OPC UA Server: Simulates an OPC UA server with sensor data
- OPC UA Source: Reads data from the OPC UA server and publishes to Kafka
- HTTP Sink: Sends the generated data to the HTTP API Source endpoint (see the sketch below)
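As a rough sketch of that last hand-off, the HTTP Sink can be thought of as POSTing each generated event to the HTTP API Source endpoint. The URL and path below are assumptions; the deployed services resolve the real endpoint from their environment variables.

```python
# Hypothetical sketch of the HTTP Sink hand-off; the endpoint URL is an assumption.
import requests

event = {
    "srv_ts": 1753717885782747100,
    "connector_ts": 1753717885792584200,
    "type": "Double",
    "val": 198.54935414815827,
    "param": "T002",
    "machine": "3D_PRINTER_2",
}

# POST one generated event to the HTTP API Source.
response = requests.post("http://http-api-source/data", json=event, timeout=5)
response.raise_for_status()
```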
These are standalone services, including an InfluxDB2 instance.
There are various things that can be tweaked, like the name of the InfluxDB bucket. Some are configurable via environment variables, and others require adjusting code as desired.
Regardless, everything in this template has predefined values except the secrets, which must be defined upon deployment of this project (see setting secrets).
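As a purely hypothetical illustration of the environment-variable pattern (the variable name below is an assumption, not necessarily the template's exact key):

```python
# Hypothetical: read a tunable such as the bucket name from the environment,
# falling back to the template's default.
import os

bucket = os.environ.get("INFLUXDB_BUCKET", "my_bucket")
```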
```json
{
  "srv_ts": 1753717885782747100,
  "connector_ts": 1753717885792584200,
  "type": "Double",
  "val": 198.54935414815827,
  "param": "T002",
  "machine": "3D_PRINTER_2"
}
```

The HTTP source will receive IoT events from a sensor (machine) that each contain a value (`val`) for a given measurement (`param`), along with the timestamp it was generated at (`srv_ts`).
In total, there are 2 different parameters: T001 and T002.
In this example, there is only 1 machine (3D_PRINTER_2).
We will normalize these events so that each parameter is no longer an individual event.
Instead, we aggregate across all parameters so that for a given machine, we get the
average of each parameter across 1 second (determined by the event timestamp, srv_ts).
This will result in a new outgoing aggregate event:
```json
{
  "T001": 97.20,
  "machine": "3D_PRINTER_2",
  "T002": 194.41,
  "timestamp": "2025-07-28 15:52:51.600000"
}
```

This aggregation is done using a Quix Streams `tumbling_window` operation, found in the HTTP Data Normalization application.
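For a sense of what that step can look like with the Quix Streams Python library, here is a minimal sketch. The broker address, consumer group, and topic names are assumptions, and messages are assumed to be keyed by machine; the actual wiring lives in the HTTP Data Normalization application.

```python
from datetime import datetime, timedelta, timezone

from quixstreams import Application

# Broker address, consumer group, and topic names below are assumptions.
app = Application(broker_address="localhost:9092", consumer_group="normalization")

# Use the event's own srv_ts (nanoseconds) as the message timestamp (milliseconds),
# so windows align with when the event was generated, not when it arrived.
def srv_ts_extractor(value, headers, timestamp, timestamp_type) -> int:
    return value["srv_ts"] // 1_000_000

input_topic = app.topic("http-events", timestamp_extractor=srv_ts_extractor)
output_topic = app.topic("normalized-events")

sdf = app.dataframe(input_topic)

def initializer(value):
    # Track a running (sum, count) per parameter so the mean can be computed later.
    return {"machine": value["machine"], value["param"]: (value["val"], 1)}

def reducer(agg, value):
    total, count = agg.get(value["param"], (0.0, 0))
    agg[value["param"]] = (total + value["val"], count + 1)
    return agg

# 1-second tumbling window; .final() emits once per window when it closes.
sdf = (
    sdf.tumbling_window(duration_ms=timedelta(seconds=1))
    .reduce(reducer=reducer, initializer=initializer)
    .final()
)

def to_event(window):
    # Flatten the closed window into the outgoing aggregate event shape.
    agg = dict(window["value"])
    machine = agg.pop("machine")
    ts = datetime.fromtimestamp(window["end"] / 1000, tz=timezone.utc)
    event = {param: total / count for param, (total, count) in agg.items()}
    event["machine"] = machine
    event["timestamp"] = ts.strftime("%Y-%m-%d %H:%M:%S.%f")
    return event

sdf = sdf.apply(to_event)
sdf.to_topic(output_topic)

if __name__ == "__main__":
    app.run()
```

Keeping a (sum, count) pair per parameter keeps the window state small while still allowing an exact mean when the window closes.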
These events are then pushed to InfluxDB2, into the bucket `my_bucket` under the measurement `printers` (with `machine` as a tag); a write sketch follows the table below.
my_bucket: printers
| T001 | T002 | timestamp (_time) | machine (_tag) |
|---|---|---|---|
| 97.20 | 194.41 | "2025-07-28 15:52:51.600000" | "3D_PRINTER_2" |
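For reference, a minimal sketch of writing one such row with the official influxdb-client package; the URL, token, and org are placeholders, since the deployed sink takes these from environment variables and the project secrets.

```python
from datetime import timezone, datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

event = {
    "T001": 97.20,
    "machine": "3D_PRINTER_2",
    "T002": 194.41,
    "timestamp": "2025-07-28 15:52:51.600000",
}

# URL, token, and org are placeholders for this sketch.
with InfluxDBClient(url="http://localhost:8086", token="<token>", org="quix") as client:
    point = (
        Point("printers")                  # measurement
        .tag("machine", event["machine"])  # machine becomes the tag
        .field("T001", event["T001"])      # one field per parameter
        .field("T002", event["T002"])
        .time(datetime.fromisoformat(event["timestamp"]).replace(tzinfo=timezone.utc))
    )
    client.write_api(write_options=SYNCHRONOUS).write(bucket="my_bucket", record=point)
```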
There is a simple Grafana dashboard included in the project.
Click on the blue link to log in to Grafana.
- username: `admin`
- password: whatever value `grafana_password` was set to when first setting up the template
Then, navigate to the dashboards tab.
There are a simple time series graph and a mean value gauge, each based on the selected time window.
You can select which column to view (`T001`, `T002`) for the given graphs.