<?xml version="1.0" encoding="UTF-8"?>
<!--
Description
An RTI Connext QoS Profile that provides high throughput
for streaming reliable data.
This profile depends primarily on:
Data writer:
- batch: combining multiple samples into a single network packet to
increase throughput
- protocol: send heartbeats to readers more frequently to keep cache levels low
Data reader:
- protocol: respond more aggressively to heartbeats with positive or
negative acknowledgements to speed up repairs of lost packets
Domain participant:
- Increased transport buffer sizes to efficiently send and receive many
large packets
-->
<!-- ================================================================= -->
<!-- Throughput QoS Profile -->
<!-- ================================================================= -->
<!--
Your XML editor may be able to provide validation and auto-completion services
as you type. To enable these services, replace the opening tag of this
document with the following, and update the absolute path as appropriate for
your installation:
<dds xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="/my_NDDSHOME_directory/resource/qos_profiles_myVersion/schema/rti_dds_qos_profiles.xsd">
-->
<dds>
<qos_library name="DefaultLibrary">
<!--
The HighThroughput profile is an extension of the Reliable profile.
RTI Connext provides APIs for loading multiple QoS
profile files at once, and referring from one to the other, but for
the sake of simplicity, we duplicate the Reliable profile here. For
more information about how it works, see the file reliable.xml.
-->
<qos_profile name="Reliable">
<datareader_qos>
<reliability>
<kind>RELIABLE_RELIABILITY_QOS</kind>
</reliability>
<history>
<kind>KEEP_ALL_HISTORY_QOS</kind>
</history>
<protocol>
<rtps_reliable_reader>
<min_heartbeat_response_delay>
<sec>0</sec>
<nanosec>0</nanosec>
</min_heartbeat_response_delay>
<max_heartbeat_response_delay>
<sec>0</sec>
<nanosec>0</nanosec>
</max_heartbeat_response_delay>
</rtps_reliable_reader>
</protocol>
</datareader_qos>
<datawriter_qos>
<reliability>
<kind>RELIABLE_RELIABILITY_QOS</kind>
<max_blocking_time>
<sec>5</sec>
<nanosec>0</nanosec>
</max_blocking_time>
</reliability>
<history>
<kind>KEEP_ALL_HISTORY_QOS</kind>
</history>
<resource_limits>
<max_samples>32</max_samples>
</resource_limits>
<protocol>
<rtps_reliable_writer>
<low_watermark>5</low_watermark>
<high_watermark>15</high_watermark>
<heartbeat_period>
<sec>0</sec>
<nanosec>100000000</nanosec>
</heartbeat_period>
<fast_heartbeat_period>
<sec>0</sec>
<nanosec>10000000</nanosec>
</fast_heartbeat_period>
<late_joiner_heartbeat_period>
<sec>0</sec>
<nanosec>10000000</nanosec>
</late_joiner_heartbeat_period>
<max_heartbeat_retries>500</max_heartbeat_retries>
<min_nack_response_delay>
<sec>0</sec>
<nanosec>0</nanosec>
</min_nack_response_delay>
<max_nack_response_delay>
<sec>0</sec>
<nanosec>0</nanosec>
</max_nack_response_delay>
<min_send_window_size>32</min_send_window_size>
<max_send_window_size>32</max_send_window_size>
</rtps_reliable_writer>
</protocol>
</datawriter_qos>
</qos_profile>
<!--
The HighThroughput profile extends the Reliable profile to perform
additional, finer-grained performance tuning specific to applications
that send continuously streaming data. The parameters specified here
add to and/or override the parameters specified in the Reliable
profile.
-->
<qos_profile name="HighThroughput"
base_name="Reliable"
is_default_qos="true">
<datawriter_qos>
<writer_resource_limits>
<!--
The number of batches (not samples) for which the
DataWriter will allocate space.
The initial_batches parameter is also set here, indicating
to the writer that it should pre-allocate all of the space
up front. Pre-allocation will remove memory allocation
from the critical path of the application, improving
performance and determinism.
Finite resources are not required for strict reliability.
However, by limiting how far "ahead" of its readers a
writer is able to get, you can make the system more
robust and performant in the face of slow readers and/or
dropped packets while at the same time constraining your
memory growth.
-->
<max_batches>32</max_batches>
<initial_batches>32</initial_batches>
</writer_resource_limits>
<!--
We're limiting resources based on the number of batches. We
could limit resources on a per-sample basis too if we wanted
to; we'd probably want to set the value based on how many samples
we expect to be in each batch. Rather than come up with a
heuristic, however, it's more straightforward to override
this value to leave the value unlimited. (If you were to set
both, the first limit to be hit would take effect.)
-->
<resource_limits>
<max_samples>LENGTH_UNLIMITED</max_samples>
</resource_limits>
<protocol>
<rtps_reliable_writer>
<!--
Speed up the heartbeat rate. See reliable.xml for
more information about this parameter.
-->
<heartbeat_period>
<!-- 10 milliseconds: -->
<sec>0</sec>
<nanosec>10000000</nanosec>
</heartbeat_period>
<!--
Speed up the heartbeat rate. See reliable.xml for
more information about this parameter.
-->
<fast_heartbeat_period>
<!-- 1 millisecond: -->
<sec>0</sec>
<nanosec>1000000</nanosec>
</fast_heartbeat_period>
<!--
Speed up the heartbeat rate. See reliable.xml for
more information about this parameter.
-->
<late_joiner_heartbeat_period>
<!-- 1 millisecond: -->
<sec>0</sec>
<nanosec>1000000</nanosec>
</late_joiner_heartbeat_period>
<!--
The heartbeat rate is faster, so allow more time for
readers to respond before they are deactivated. See
reliable.xml for more information about this parameter.
-->
<max_heartbeat_retries>1000</max_heartbeat_retries>
<!--
Set the maximum number of unacknowledged samples
(batches) in the DataWriter's queue equal to the max
number of batches.
-->
<min_send_window_size>32</min_send_window_size>
<max_send_window_size>32</max_send_window_size>
</rtps_reliable_writer>
</protocol>
<!--
When sending very many small data samples, the efficiency of
the network can be increased by batching multiple samples
together in a single protocol-level message (usually
corresponding to a single network datagram). Batching can
offer very substantial throughput gains, but often at the
expense of latency, although in some configurations, the
latency penalty can be very small, zero, or even negative.
-->
<batch>
<enable>true</enable>
<!--
Batches can be "flushed" to the network based on a
maximum size. This size can be based on the total number
of bytes in the accumulated data samples and/or the number
of samples. Whenever the first of these limits is reached,
the batch will be flushed.
-->
<max_data_bytes>30720</max_data_bytes><!-- 30 KB -->
<max_samples>LENGTH_UNLIMITED</max_samples>
<!--
Batches can be flushed to the network based on an elapsed
time.
-->
<max_flush_delay>
<sec>DURATION_INFINITE_SEC</sec>
<nanosec>DURATION_INFINITE_NSEC</nanosec>
</max_flush_delay>
<!--
The middleware will associate a source timestamp with a
batch when it is started. The duration below indicates
the amount of time that may pass before the middleware
will insert an additional timestamp into the middle of an
existing batch.
Shortening this duration can give readers increased
timestamp resolution if they require that. However,
lengthening this duration decreases the amount of
meta-data on the network, potentially improving
throughput, especially if the data samples are very small.
If this delay is set to an infinite time period,
timestamps will be inserted only once per batch, and
furthermore the middleware will not need to check the
time with each sample in the batch, reducing the amount
of computation on the send path and potentially improving
both latency and throughput performance.
-->
<source_timestamp_resolution>
<sec>DURATION_INFINITE_SEC</sec>
<nanosec>DURATION_INFINITE_NSEC</nanosec>
</source_timestamp_resolution>
</batch>
</datawriter_qos>
<participant_qos>
<!--
The participant name, if it is set, will be displayed in the
RTI Analyzer tool, making it easier for you to tell one
application from another when you're debugging.
-->
<participant_name>
<name>RTI Example (High Throughput)</name>
</participant_name>
<receiver_pool>
<!--
The maximum size of a datagram that can be deserialized,
independent of the network transport. By default, this
value is 9 KB, since that is a common default maximum
size for UDP datagrams on some platforms. However, on
platforms that support larger datagrams - up to 64 KB -
it's a good idea to increase this limit for demanding
applications to avoid socket read errors.
-->
<buffer_size>65536</buffer_size><!-- 64 KB -->
</receiver_pool>
<property>
<value>
<!--
Configure UDPv4 transport:
-->
<element>
<!--
On platforms that support it, increase the maximum
size of a UDP datagram to the maximum supported by
the protocol: 64 KB. That will allow you to send
the large packets that can result when you batch
samples.
-->
<name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
<value>65536</value><!-- 64 KB -->
</element>
<element>
<!--
If possible, increase the UDP send socket buffer
size. This will allow you to send multiple large
packets without UDP send errors.
On some platforms (e.g. Linux), this value is
limited by a system-wide policy. Setting it to
a larger value will fail silently; the value will
be set to the maximum allowed by that policy.
-->
<name>dds.transport.UDPv4.builtin.send_socket_buffer_size</name>
<value>524288</value><!-- 512 KB -->
</element>
<element>
<!--
If possible, increase the UDP receive socket
buffer size. This will allow you to receive
multiple large packets without UDP receive errors.
On some platforms (e.g. Linux), this value is
limited by a system-wide policy. Setting it to
a larger value will fail silently; the value will
be set to the maximum allowed by that policy.
-->
<name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
<value>2097152</value><!-- 2 MB -->
</element>
<!--
Configure shared memory transport:
-->
<element>
<!--
Set the shared memory maximum message size to the
same value that was set for UDP.
-->
<name>dds.transport.shmem.builtin.parent.message_size_max</name>
<value>65536</value><!-- 64 KB -->
</element>
<element>
<!--
Set the size of the shared memory transport's
receive buffer to some large value.
-->
<name>dds.transport.shmem.builtin.receive_buffer_size</name>
<value>2097152</value><!-- 2 MB -->
</element>
<element>
<!--
Set the maximum number of messages that the shared
memory transport can cache while waiting for them
to be read and deserialized.
-->
<name>dds.transport.shmem.builtin.received_message_count_max</name>
<value>2048</value>
</element>
<!--
Increase the maximum size of the string built-in type. This
configuration is only necessary for applications that
use the built-in types (such as Hello_builtin).
-->
<element>
<name>dds.builtin_type.string.max_size</name>
<value>2048</value>
</element>
</value>
</property>
</participant_qos>
</qos_profile>
</qos_library>
</dds>