Important
This page includes instructions for managing Azure IoT Operations components using Kubernetes deployment manifests, which is in PREVIEW. This feature is provided with several limitations, and shouldn't be used for production workloads.
See the Supplemental Terms of Use for Microsoft Azure Previews for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Tip
Data flow graphs offer an expanded mapping language with additional functions, composable transforms, and features like conditional routing and time-based aggregation. For new projects that use MQTT, Kafka, or OpenTelemetry endpoints, see Transform data with map in data flow graphs.
Use the data flow mapping language to transform data in Azure IoT Operations. The syntax is a simple, yet powerful, way to define mappings that transform data from one format to another. This article provides an overview of the data flow mapping language and key concepts.
Mapping allows you to transform data from one format to another. Consider the following input record:
{
"Name": "Grace Owens",
"Place of birth": "London, TX",
"Birth Date": "19840202",
"Start Date": "20180812",
"Position": "Analyst",
"Office": "Kent, WA"
}
Compare it with the output record:
{
"Employee": {
"Name": "Grace Owens",
"Date of Birth": "19840202"
},
"Employment": {
"Start Date": "20180812",
"Position": "Analyst, Kent, WA",
"Base Salary": 78000
}
}
In the output record, the following changes are made to the input record data:
- Fields renamed: The Birth Date field is now Date of Birth.
- Fields restructured: Both Name and Date of Birth are grouped under the new Employee category.
- Field deleted: The Place of birth field is removed because it isn't present in the output.
- Field added: The Base Salary field is a new field in the Employment category.
- Field values changed or merged: The Position field in the output combines the Position and Office fields from the input.
The transformations are achieved through mapping, which typically involves:
- Input definition: Identifying the fields in the input records that are used.
- Output definition: Specifying where and how the input fields are organized in the output records.
- Conversion (optional): Modifying the input fields to fit into the output fields. An expression is required when multiple input fields are combined into a single output field.
The following mapping is an example:
{
inputs: [
'BirthDate'
]
output: 'Employee.DateOfBirth'
}
{
inputs: [
'Position' // - - - - $1
'Office' // - - - - $2
]
output: 'Employment.Position'
expression: '$1 + ", " + $2'
}
{
inputs: [
'$context(position).BaseSalary'
]
output: 'Employment.BaseSalary'
}
The example maps:
- One-to-one mapping: BirthDate is directly mapped to Employee.DateOfBirth without conversion.
- Many-to-one mapping: Combines Position and Office into a single Employment.Position field. The conversion formula ($1 + ", " + $2) merges these fields into a formatted string.
- Contextual data: BaseSalary is added from a contextual dataset named position.
Field references
Field references show how to specify paths in the input and output by using dot notation like Employee.DateOfBirth or accessing data from a contextual dataset via $context(position).
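For example, dot notation lets a mapping read a nested input field and write to a nested output field. The field names in this sketch are hypothetical:

```bicep
{
  inputs: [
    'Payload.Sensor.Temperature'   // nested input field, accessed with dot notation
  ]
  output: 'Telemetry.Temperature'  // nested output field in the result record
}
```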
Metadata properties
When you use MQTT or Kafka as a source or destination, you can access metadata properties like topics, user properties, and headers in your mappings. For full syntax details and examples, see Metadata fields in the expressions reference.
Contextualization dataset selectors
These selectors allow mappings to integrate extra data from external databases, which are referred to as contextualization datasets. For details, see Contextualization datasets in the expressions reference and Enrich data by using data flows.
Record filtering
Record filtering involves setting conditions to select which records should be processed or dropped.
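As a sketch, a filter pairs one or more inputs with a boolean expression; records for which the expression evaluates to false are dropped. The field name and threshold here are hypothetical, and the exact placement of the filter stage depends on your data flow configuration:

```bicep
{
  inputs: [
    'Temperature'        // hypothetical input field to test
  ]
  expression: '$1 > 20'  // keep only records where Temperature exceeds 20
}
```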
Dot notation
Data flow field paths use dot notation to reference nested fields, with escaping for special characters. For full syntax rules and examples, see Dot notation and escaping in the expressions reference.
Escaping
For rules on escaping dots and special characters in field paths, see Dot notation and escaping in the expressions reference.
Wildcards
Wildcards use the asterisk (*) to match multiple fields at once, which simplifies mappings when the output closely resembles the input. For full wildcard syntax, placement rules, multi-input wildcards, and specialization behavior, see Wildcards in the expressions reference.
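For example, a common pass-through mapping uses a single asterisk to copy every input field to the same path in the output:

```bicep
{
  inputs: [
    '*'        // match every input field
  ]
  output: '*'  // write each matched field to the corresponding output path
}
```

You can then add more specific mappings alongside the wildcard to override or restructure individual fields.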
Last known value
You can track the last known value of a property. Suffix the input field with ? $last to capture the last known value of the field. When a property is missing a value in a subsequent input payload, the last known value is mapped to the output payload.
For example, consider the following mapping:
{
  inputs: [
    'Temperature ? $last'
  ]
  output: 'Thermostat.Temperature'
}
In this example, the last known value of Temperature is tracked. If a subsequent input payload doesn't contain a Temperature value, the last known value is used in the output.
Related content
- Expressions reference - Operators, functions, data types, and type conversion rules for all data flow transforms.
- Filter data in a data flow
- Enrich data by using data flows
- Create a data flow