<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title><![CDATA[The Fabric]]></title>
<link href="http://jrametta.github.io/atom.xml" rel="self"/>
<link href="http://jrametta.github.io/"/>
<updated>2014-11-02T23:21:35-08:00</updated>
<id>http://jrametta.github.io/</id>
<author>
<name><![CDATA[Jeff Rametta]]></name>
</author>
<generator uri="http://octopress.org/">Octopress</generator>
<entry>
<title type="html"><![CDATA[VXLAN Gateway Configuration Using the VCS RESTful API]]></title>
<link href="http://jrametta.github.io/blog/2014/10/26/overlay-gateway/"/>
<updated>2014-10-26T02:25:41-07:00</updated>
<id>http://jrametta.github.io/blog/2014/10/26/overlay-gateway</id>
<content type="html"><![CDATA[<p>With NOS 5.0, Brocade VCS fabrics now provide support for network programmability using a RESTful API. In this
post, I demonstrate how the API can be used to manage VXLAN overlay-gateway functionality in the VDX.</p>
<!--more-->
<p>The Overlay Gateway feature in the VDX can enable layer 2 extension across multiple routed VCS fabrics using
VXLAN encapsulation. In the diagram, there are two VCS fabrics connected by a layer 3 core. Should the need
arise to provide layer 2 connectivity between servers in one fabric and servers in another, we can create a
VXLAN tunnel across the network, from any VLAN in one fabric to any VLAN in the other.</p>
<p><img src="http://jrametta.github.io/images/vcs-overlay.png"></p>
<p>We can do this from the CLI using the below commands, but since I’m trying to improve my programming skills,
we will replicate this configuration process in Python using the Requests module.</p>
<figure class='code'><pre><code>site100# show run overlay-gateway
overlay-gateway sko_gateway
 type layer2-extension
 ip interface Loopback 100
 attach rbridge-id add 100
 map vlan 100 vni 1600000
 site site200
  ip address 200.200.200.200
  extend vlan add 100
 !
 activate
</code></pre></figure>
<p>As you can see, there are quite a few steps involved in creating a VXLAN overlay gateway. You need to:</p>
<ul>
<li> Define an overlay-gateway name</li>
<li> Set the gateway type to l2 extension</li>
<li> Specify a loopback id to use for the VTEP</li>
<li> Attach an RBridge ID</li>
<li> Map a VLAN to a VNI</li>
<li> Activate the overlay gateway</li>
<li> Define a name for the remote site</li>
<li> Specify the remote site&#8217;s VTEP IP address</li>
<li> Specify which VLANs to extend</li>
</ul>
<p>The full source code can be found <a href="https://github.com/jrametta/brocade-vf-extension">here</a>, but the following
snippets might provide the gist of what needs to be accomplished.</p>
<p>For all of the REST calls made here, I use a base configuration URI of
<code>http://<switch_ip_address>/rest/config/running</code> to access the running configuration resources. The VDX
supports create, read, update, delete (CRUD) operations using the standard HTTP methods GET, POST, PUT, PATCH,
and DELETE. All request payloads are represented as XML elements.</p>
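<p>Before building any payloads, it can help to sanity-check the request we are about to send. The sketch below (the switch address and credentials are placeholders) prepares, but does not send, a GET against the base configuration URI using the Requests module:</p>

```python
import requests

def build_config_request(switch_ip, username, password):
    """Prepare (but do not send) a GET request for the
    running-configuration resources of the switch."""
    url = "http://{}/rest/config/running".format(switch_ip)
    req = requests.Request(
        "GET", url,
        auth=(username, password),
        headers={"Accept": "application/vnd.configuration.resource+xml"},
    )
    # prepare() applies the auth and headers without opening a connection
    return req.prepare()

prepared = build_config_request("10.0.0.1", "admin", "password")
print(prepared.url)  # http://10.0.0.1/rest/config/running
```

<p>Sending it is then a matter of <code>session.send(prepared)</code>, or simply calling <code>session.get(url)</code> on an authenticated session, as the scripts below do.</p>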
<p>There are helper functions in
<a href="https://github.com/jrametta/brocade-vf-extension/blob/master/vfx_payload.py">vfx_payload.py</a>
that are used to build the XML payload data for all of the REST
calls needed to configure and activate the overlay-gateway. ElementTree is used to build and parse the XML
as needed. The example here returns the XML payload for creating the overlay-gateway (the first step required
to create a tunnel).</p>
<figure class='code'><pre><code class='py'>from xml.etree import ElementTree
from xml.etree.ElementTree import Element

def overlay_gateway(gateway_name):
    """build xml payload for create overlay request"""
    gw_element = Element('overlay-gateway')
    name_element = ElementTree.SubElement(gw_element, 'name')
    name_element.text = gateway_name
    return ElementTree.tostring(gw_element)
</code></pre></figure>
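<p>The other payload builders follow the same pattern. As an illustration only (the element names here are hypothetical, not taken from the actual VDX schema), a VLAN-to-VNI mapping payload could be built the same way:</p>

```python
from xml.etree import ElementTree
from xml.etree.ElementTree import Element

def map_vlan_vni(vlan_id, vni):
    """Build an XML payload mapping a VLAN to a VNI.
    Element names are illustrative; the real schema may differ."""
    map_element = Element('map')
    vlan_element = ElementTree.SubElement(map_element, 'vlan')
    vlan_element.text = str(vlan_id)
    vni_element = ElementTree.SubElement(map_element, 'vni')
    vni_element.text = str(vni)
    return ElementTree.tostring(map_element)

payload = map_vlan_vni(100, 1600000)
```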
<p>In <a href="https://github.com/jrametta/brocade-vf-extension/blob/master/vfx.py">vfx.py</a>, I use the Python Requests
module to create the overlay-gateway via a POST request to the management virtual IP of the VCS fabric,
using the payload built by the code above. Basic Authentication and header details are all set up in the
class initialization.</p>
<figure class='code'><pre><code class='py'>import requests
import vfx_payload as PF

def tunnel_create(self):
    """create vxlan overlay gateway"""
    payload = PF.overlay_gateway(self.gateway_name)
    req = self.session.post(self.config_url, data=payload)
    self.check_response(req)
    return req
</code></pre></figure>
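<p>The <code>check_response</code> helper isn&#8217;t shown in the snippet; a minimal stand-in (hypothetical, the real implementation may log or parse the error body instead) just raises when the switch returns a non-2xx status:</p>

```python
class RestError(Exception):
    """Raised when the switch returns a non-success status code."""

def check_response(req):
    """Raise RestError unless the response status is 2xx."""
    if not 200 <= req.status_code < 300:
        raise RestError("request failed with status {}".format(req.status_code))

# A stub response object is enough to exercise the check offline
class StubResponse(object):
    def __init__(self, status_code):
        self.status_code = status_code

check_response(StubResponse(201))  # a successful create passes silently
```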
<p>You can basically repeat this process for each of the overlay gateway configuration tasks, building the XML
payload and making the corresponding POST/PUT/DELETE request as needed.</p>
<p>The REST calls themselves are all in a class called overlay_gw. When creating an instance of overlay_gw,
several parameters are initialized, the configuration URL is defined, and the request session is set up.
A media type of <code>application/vnd.configuration.resource+xml</code> must be specified in the Accept header field
of the request.</p>
<figure class='code'><pre><code class='py'>class overlay_gw(object):
    """class to manage vxlan overlay gateways"""
    def __init__(self, name, hostname, username, password):
        self.gateway_name = name
        self.ext_type = 'layer2-extension'

        self.config_url = "http://{}/rest/config/running".format(hostname)

        self.session = requests.Session()
        self.session.auth = (username, password)
        self.session.headers = {'Accept': 'application/vnd.configuration.resource+xml'}
</code></pre></figure>
<p>Creating an instance of overlay_gw and making the proper calls will build the configuration on one fabric.
Repeat the process on the other and the tunnel should be online. The final command to create a tunnel with
my script is</p>
<figure class='code'><pre><code>$ ./vfx.py create --hostname 10.254.11.17 --gw sko_gateway --loopback 100 --rbridge 100 \
    --vlan 100 --vni 1600000 --remote_site site2 --remote_ip 200.200.200.200

$ ./vfx.py show --hostname 10.254.11.17 --gw sko_gateway
GW Name: sko_gateway
Active: true
Type: layer2-extension
RBridge_ID 100
VLAN: 100
</code></pre></figure>
<p>Run the command against the remote switch and you should have a VXLAN tunnel between two fabrics.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[OpenStack Live Migration]]></title>
<link href="http://jrametta.github.io/blog/2014/07/16/openstack-live-migration/"/>
<updated>2014-07-16T21:52:20-07:00</updated>
<id>http://jrametta.github.io/blog/2014/07/16/openstack-live-migration</id>
<content type="html"><![CDATA[<p>Live migration in the cloud can be useful at times as it can minimize downtime during maintenance and move
instances from overloaded compute nodes. A little while back I setup a devstack cluster with a shared NFS
filesystem to perform live migration of OpenStack instances from one compute node to another using KVM
hypervisors.</p>
<!--more-->
<p>When using the Brocade VCS plugin for OpenStack Neutron, the tenant network VLAN configuration is
automatically updated in the physical network when a new instance is created and also when it is moved to
another compute node. This enables live migration without needing to make any changes to the network.</p>
<p>In this writeup I describe the process to reconfigure an existing 2-node Icehouse devstack deployment to
support shared storage-based live migration of OpenStack instances using an NFS server. If you don’t already
have a working devstack setup,
take a look at <a href="http://jrametta.github.io/the-fabric/blog/2014/07/15/devstack-icehouse-with-brocade-ml2-plugin">this post</a> using Ubuntu.</p>
<h2>NFS Server Configuration</h2>
<p>I built a simple NFS server using Ubuntu. Install the software package and prepare a directory to
export.</p>
<figure class='code'><pre><code>$ sudo apt-get install nfs-kernel-server
$ sudo mkdir -p /srv/demo-stack/instances
$ sudo chmod o+x /srv/demo-stack/instances
$ sudo chown stack:stack /srv/demo-stack/instances
</code></pre></figure>
<p>Add an entry like the one below into <code>/etc/exports</code> and then export the directory via <code>sudo exportfs -ra</code></p>
<figure class='code'><pre><code>/srv/demo-stack/instances 10.254.0.0/20(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
</code></pre></figure>
<h2>OpenStack Node Configuration</h2>
<p>Each of the devstack nodes will be NFS clients. Set up a directory and mount the remote filesystem.
Optionally add the mount point to your <code>/etc/fstab</code> so it persists after reboot.</p>
<figure class='code'><pre><code>mkdir -p /opt/stack/data/instances
sudo apt-get install rpcbind nfs-common
sudo mount 10.254.15.12:/srv/demo-stack/instances /opt/stack/data/instances
</code></pre></figure>
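<p>For reference, a matching <code>/etc/fstab</code> entry (reusing the NFS server address from the mount command above; adjust the options to taste) might look like:</p>

```
10.254.15.12:/srv/demo-stack/instances  /opt/stack/data/instances  nfs  defaults  0  0
```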
<p>Several changes to libvirt were made to enable migration. Edit <code>/etc/libvirt/libvirtd.conf</code> to include the
following</p>
<figure class='code'><pre><code>listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
</code></pre></figure>
<p>Edit the libvirtd options in <code>/etc/default/libvirt-bin</code> so the daemon listens over TCP</p>
<figure class='code'><pre><code>libvirtd_opts="-d -l"
</code></pre></figure>
<p>Restart libvirt</p>
<figure class='code'><pre><code>sudo service libvirt-bin restart
</code></pre></figure>
<p>The final configuration is to make sure the VNC server listens on all interfaces and the path to hold the nova
images is set to the mounted NFS directory. This can be done by adding the following lines to devstack’s
local.conf</p>
<figure class='code'><pre><code># set this for live migration
VNCSERVER_LISTEN=0.0.0.0
NOVA_INSTANCES_PATH=/opt/stack/data/instances
</code></pre></figure>
<p>That should be it. Run stack.sh and make sure everything is up and running properly.</p>
<h2>Testing things out</h2>
<p>Launch an instance in the cloud using Horizon or the CLI. You can check which compute node the instance lives
on using nova commands (currently it resides on icehouse1)</p>
<figure class='code'><pre><code>stack@icehouse1:~/devstack$ nova-manage vm list | awk '{print $1,$2,$4,$5}' | column -t
instance   node       state   launched
instance1  icehouse1  active  2014-07-17
</code></pre></figure>
<p>If you take a look at the VCS fabric, you can find the physical port in the network for the compute node
hosting this instance based on its MAC address (in my case port Gi 5/0/7).</p>
<figure class='code'><pre><code>openstack-rb5# show port-profile status
Port-Profile           PPID  Activated  Associated MAC  Interface
openstack-profile-901  1     Yes        fa16.3e0e.265d  None
                                        fa16.3e98.c835  Gi 5/0/7
                                        fa16.3ee7.91bb  None
openstack-profile-902  2     Yes        fa16.3eee.5fe1  None
</code></pre></figure>
<p>You’ll notice it belongs to openstack-profile-901. If you examine the configuration for this port-profile,
you can see the VLAN association. Any instances in this particular tenant network will be carried on VLAN 901
as the traffic traverses the VCS fabric.</p>
<figure class='code'><pre><code>openstack-rb5# show port-profile name openstack-profile-901
port-profile openstack-profile-901
 ppid 1
 vlan-profile
  switchport
  switchport mode trunk
  switchport trunk allowed vlan add 901
</code></pre></figure>
<p>Run the nova live-migration command to move the VM to another compute node. I ran a continuous ping from the
instance to another server to see if any packets were dropped during the migration.</p>
<figure class='code'><pre><code>stack@icehouse1:~/devstack$ nova live-migration instance1 icehouse2
stack@icehouse1:~/devstack$
stack@icehouse1:~/devstack$ nova list
+--------------------------------------+-----------+-----------+------------+-------------+---------------------+
| ID                                   | Name      | Status    | Task State | Power State | Networks            |
+--------------------------------------+-----------+-----------+------------+-------------+---------------------+
| 77df10d1-90da-4f88-91ed-dc360ce7733c | instance1 | MIGRATING | migrating  | Running     | private=192.168.0.2 |
+--------------------------------------+-----------+-----------+------------+-------------+---------------------+
</code></pre></figure>
<p>Looking at the VCS fabric again after a few moments, you should see that the instance has moved to another
OpenStack compute node (it’s now on port Gi 6/0/7)</p>
<figure class='code'><pre><code>openstack-rb5# show port-p status
Port-Profile           PPID  Activated  Associated MAC  Interface
openstack-profile-901  1     Yes        fa16.3e0e.265d  None
                                        fa16.3e98.c835  Gi 6/0/7
                                        fa16.3ee7.91bb  None
openstack-profile-902  2     Yes        fa16.3eee.5fe1  None
</code></pre></figure>
<p>Running nova show confirms the migration has taken place. If you check the instance, you should see that no
pings were lost during the move.</p>
<figure class='code'><pre><code>stack@icehouse1:~/devstack$ nova-manage vm list | awk '{print $1,$2,$4,$5}' | column -t
instance   node       state   launched
instance1  icehouse2  active  2014-07-17
</code></pre></figure>
<p>Congratulations, you have successfully performed a live migration of an OpenStack instance with zero downtime
;)</p>
<h2>References</h2>
<ul>
<li><a href="http://docs.openstack.org/trunk/config-reference/content/section_configuring-compute-migrations.html">http://docs.openstack.org/trunk/config-reference/content/section_configuring-compute-migrations.html</a></li>
</ul>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Devstack Icehouse With Brocade ML2 Plugin]]></title>
<link href="http://jrametta.github.io/blog/2014/07/15/devstack-icehouse-with-brocade-ml2-plugin/"/>
<updated>2014-07-15T00:21:31-07:00</updated>
<id>http://jrametta.github.io/blog/2014/07/15/devstack-icehouse-with-brocade-ml2-plugin</id>
<content type="html"><![CDATA[<p><a href="http://devstack.org">Devstack</a> is a scripted installation of OpenStack that can be used for development or
demo purposes. This writeup covers a simple two node devstack installation using the Brocade VCS plugin for
OpenStack networking (aka Neutron).</p>
<!--more-->
<p>The VCS ML2 plugin supports both Open vSwitch and Linux Bridge agents and realizes tenant networks as
port-profiles in the physical network infrastructure. A port-profile in a Brocade Ethernet fabric is like a
network policy for VMs or OpenStack instances and can contain information like VLAN assignment, QoS
information, and ACLs. Because tenant networks are provisioned end-to-end, no additional networking setup is
required anywhere in the network.</p>
<h2>Deployment Topology</h2>
<p>My hardware environment is pretty simple. I have two servers on which to run OpenStack – one will be a
controller/compute node, the other will just be a compute node.</p>
<p><img src="http://jrametta.github.io/images/os_topo.png"></p>
<h2>Server Configuration</h2>
<p>I used Ubuntu Precise as the OS platform. The network interfaces were configured as below on the controller.
The compute node configuration is similar.</p>
<figure class='code'><pre><code># The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.17.88.129
netmask 255.255.240.0
gateway 10.17.80.1
dns-nameservers 10.17.80.21

# Private tenant network interface (connected to VCS fabric)
auto eth1
iface eth1 inet manual
up ifconfig eth1 up promisc
</code></pre></figure>
<p>The VCS plugin currently uses NETCONF to communicate with the Ethernet fabric, so the ncclient Python library is required on the controller node.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># git clone https://code.grnet.gr/git/ncclient
</span><span class='line'># cd ncclient && python ./setup.py install</span></code></pre></td></tr></table></div></figure>
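<p>Under the hood, the plugin opens a NETCONF session to the fabric and pushes port-profile configuration as XML. As a rough illustration of that exchange (the element names below are hypothetical, not the actual NOS schema; the real payloads live in the plugin driver), a config fragment might be assembled like this:</p>

```python
# Illustrative sketch only: element names are invented, not the real
# Brocade NOS schema; it shows the shape of a NETCONF <config> payload.
import xml.etree.ElementTree as ET

def build_port_profile_config(profile_name, vlan_id):
    """Assemble a NETCONF <config> fragment creating one port-profile."""
    config = ET.Element("config")
    profile = ET.SubElement(config, "port-profile")
    ET.SubElement(profile, "name").text = profile_name
    vlan = ET.SubElement(profile, "vlan-profile")
    ET.SubElement(vlan, "switchport-mode").text = "trunk"
    ET.SubElement(vlan, "allowed-vlan").text = str(vlan_id)
    return ET.tostring(config, encoding="unicode")

payload = build_port_profile_config("openstack-profile-2", 2)
```

<p>The driver would hand a payload like this to the NETCONF session (ncclient exposes this as an edit-config operation on a connected manager).</p>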
<p>OpenStack runs as a non-root user with sudo privileges. I usually have such a user set up already, but devstack will create one for you if you try to run stack.sh as root.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># adduser stack
</span><span class='line'># echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers</span></code></pre></td></tr></table></div></figure>
<p>As the stack user, clone the devstack repo into the home directory</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># git clone https://github.com/openstack-dev/devstack.git
</span><span class='line'># cd devstack</span></code></pre></td></tr></table></div></figure>
<h2>Configuring Devstack</h2>
<p>Controller node local.conf</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
<span class='line-number'>10</span>
<span class='line-number'>11</span>
<span class='line-number'>12</span>
<span class='line-number'>13</span>
<span class='line-number'>14</span>
<span class='line-number'>15</span>
<span class='line-number'>16</span>
<span class='line-number'>17</span>
<span class='line-number'>18</span>
<span class='line-number'>19</span>
<span class='line-number'>20</span>
<span class='line-number'>21</span>
<span class='line-number'>22</span>
<span class='line-number'>23</span>
<span class='line-number'>24</span>
<span class='line-number'>25</span>
<span class='line-number'>26</span>
<span class='line-number'>27</span>
<span class='line-number'>28</span>
<span class='line-number'>29</span>
<span class='line-number'>30</span>
<span class='line-number'>31</span>
<span class='line-number'>32</span>
<span class='line-number'>33</span>
<span class='line-number'>34</span>
<span class='line-number'>35</span>
<span class='line-number'>36</span>
<span class='line-number'>37</span>
<span class='line-number'>38</span>
<span class='line-number'>39</span>
<span class='line-number'>40</span>
<span class='line-number'>41</span>
<span class='line-number'>42</span>
<span class='line-number'>43</span>
<span class='line-number'>44</span>
<span class='line-number'>45</span>
<span class='line-number'>46</span>
<span class='line-number'>47</span>
<span class='line-number'>48</span>
<span class='line-number'>49</span>
<span class='line-number'>50</span>
<span class='line-number'>51</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>[[local|localrc]]
</span><span class='line'># MISC
</span><span class='line'># RECLONE=yes
</span><span class='line'># OFFLINE=False
</span><span class='line'>
</span><span class='line'># Branch/Repos
</span><span class='line'>NOVA_BRANCH=stable/icehouse
</span><span class='line'>GLANCE_BRANCH=stable/icehouse
</span><span class='line'>HORIZON_BRANCH=stable/icehouse
</span><span class='line'>KEYSTONE_BRANCH=stable/icehouse
</span><span class='line'>QUANTUM_BRANCH=stable/icehouse
</span><span class='line'>NEUTRON_BRANCH=stable/icehouse
</span><span class='line'>CEILOMETER_BRANCH=stable/icehouse
</span><span class='line'>HEAT_BRANCH=stable/icehouse
</span><span class='line'>
</span><span class='line'>disable_service n-net
</span><span class='line'>enable_service g-api g-reg key n-crt n-obj n-cpu n-cond n-sch horizon
</span><span class='line'>
</span><span class='line'># Neutron services (enable)
</span><span class='line'>enable_service neutron q-svc q-agt q-dhcp q-meta q-l3
</span><span class='line'>Q_PLUGIN_CLASS=neutron.plugins.ml2.plugin.Ml2Plugin
</span><span class='line'>Q_PLUGIN_EXTRA_CONF_FILES=ml2_conf_brocade.ini
</span><span class='line'>Q_PLUGIN_EXTRA_CONF_PATH=/home/stack/devstack
</span><span class='line'>Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,brocade
</span><span class='line'>Q_ML2_PLUGIN_TYPE_DRIVERS=${Q_ML2_PLUGIN_TYPE_DRIVERS:-local,flat,vlan,gre,vxlan}
</span><span class='line'>ENABLE_TENANT_VLANS="True"
</span><span class='line'>PHYSICAL_NETWORK="phys1"
</span><span class='line'>TENANT_VLAN_RANGE=901:950
</span><span class='line'>OVS_PHYSICAL_BRIDGE=br-eth1
</span><span class='line'>
</span><span class='line'>ADMIN_PASSWORD=openstack
</span><span class='line'>DATABASE_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>RABBIT_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>SERVICE_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>SERVICE_TOKEN=$ADMIN_PASSWORD
</span><span class='line'>MYSQL_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>
</span><span class='line'>DEST=/opt/stack
</span><span class='line'>LOG=True
</span><span class='line'>LOGFILE=stack.sh.log
</span><span class='line'>LOGDAYS=1
</span><span class='line'>HOST_NAME=$(hostname)
</span><span class='line'>SERVICE_HOST=10.17.88.129
</span><span class='line'>HOST_IP=10.17.88.129
</span><span class='line'>MYSQL_HOST=$SERVICE_HOST
</span><span class='line'>RABBIT_HOST=$SERVICE_HOST
</span><span class='line'>GLANCE_HOSTPORT=$SERVICE_HOST:9292
</span><span class='line'>KEYSTONE_AUTH_HOST=$SERVICE_HOST
</span><span class='line'>KEYSTONE_SERVICE_HOST=$SERVICE_HOST
</span><span class='line'>SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
</span><span class='line'>SCREEN_HARDSTATUS="%{= Kw}%-w%{BW}%n %t%{-}%+w"
</span></code></pre></td></tr></table></div></figure>
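<p>The TENANT_VLAN_RANGE=901:950 setting hands Neutron a pool of 50 VLAN IDs on phys1, one of which is allocated to each tenant network. A minimal sketch of how such a start:end spec expands into a pool:</p>

```python
def parse_vlan_range(spec):
    """Expand a 'start:end' spec (e.g. TENANT_VLAN_RANGE=901:950) into VLAN IDs."""
    start, end = (int(part) for part in spec.split(":"))
    if not 1 <= start <= end <= 4094:   # usable 802.1Q VLAN ID range
        raise ValueError("invalid VLAN range: " + spec)
    return list(range(start, end + 1))

pool = parse_vlan_range("901:950")   # 50 tenant VLANs available
```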
<p>The controller node should also contain an ML2 configuration file, ml2_conf_brocade.ini, identifying
the authentication credentials
and management virtual IP for the VCS fabric. This file usually lives somewhere under
/etc/neutron/plugins, but its location should be specified via the <code>Q_PLUGIN_EXTRA_CONF_PATH</code> parameter in local.conf above.
I happened to just place it in stack’s home directory.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>[ml2_brocade]
</span><span class='line'>username = admin
</span><span class='line'>password = password
</span><span class='line'>address = 172.27.116.32
</span><span class='line'>ostype = NOS
</span><span class='line'>physical_networks = phys1</span></code></pre></td></tr></table></div></figure>
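<p>A typo in this file tends to make stack.sh fail late, so a quick sanity check that the [ml2_brocade] section carries the expected keys can save a rerun. A small sketch (the key list mirrors the sample above; the driver may read others):</p>

```python
import configparser

# Keys taken from the sample ini above; treated here as the minimum set.
REQUIRED_KEYS = ("username", "password", "address", "ostype", "physical_networks")

def missing_brocade_keys(ini_text):
    """Return any required keys absent from the [ml2_brocade] section."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    section = cfg["ml2_brocade"]
    return [key for key in REQUIRED_KEYS if key not in section]

sample = """[ml2_brocade]
username = admin
password = password
address = 172.27.116.32
ostype = NOS
physical_networks = phys1
"""
missing = missing_brocade_keys(sample)   # [] when the file is complete
```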
<p>Compute node local.conf</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
<span class='line-number'>10</span>
<span class='line-number'>11</span>
<span class='line-number'>12</span>
<span class='line-number'>13</span>
<span class='line-number'>14</span>
<span class='line-number'>15</span>
<span class='line-number'>16</span>
<span class='line-number'>17</span>
<span class='line-number'>18</span>
<span class='line-number'>19</span>
<span class='line-number'>20</span>
<span class='line-number'>21</span>
<span class='line-number'>22</span>
<span class='line-number'>23</span>
<span class='line-number'>24</span>
<span class='line-number'>25</span>
<span class='line-number'>26</span>
<span class='line-number'>27</span>
<span class='line-number'>28</span>
<span class='line-number'>29</span>
<span class='line-number'>30</span>
<span class='line-number'>31</span>
<span class='line-number'>32</span>
<span class='line-number'>33</span>
<span class='line-number'>34</span>
<span class='line-number'>35</span>
<span class='line-number'>36</span>
<span class='line-number'>37</span>
<span class='line-number'>38</span>
<span class='line-number'>39</span>
<span class='line-number'>40</span>
<span class='line-number'>41</span>
<span class='line-number'>42</span>
<span class='line-number'>43</span>
<span class='line-number'>44</span>
<span class='line-number'>45</span>
<span class='line-number'>46</span>
<span class='line-number'>47</span>
<span class='line-number'>48</span>
<span class='line-number'>49</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>[[local|localrc]]
</span><span class='line'>
</span><span class='line'># MISC
</span><span class='line'># RECLONE=yes
</span><span class='line'># OFFLINE=True
</span><span class='line'>
</span><span class='line'># Branch/Repos
</span><span class='line'>NOVA_BRANCH=stable/icehouse
</span><span class='line'>GLANCE_BRANCH=stable/icehouse
</span><span class='line'>HORIZON_BRANCH=stable/icehouse
</span><span class='line'>KEYSTONE_BRANCH=stable/icehouse
</span><span class='line'>QUANTUM_BRANCH=stable/icehouse
</span><span class='line'>NEUTRON_BRANCH=stable/icehouse
</span><span class='line'>
</span><span class='line'>disable_service n-net
</span><span class='line'>ENABLED_SERVICES=n-cpu,rabbit,quantum,q-agt,n-novnc
</span><span class='line'>
</span><span class='line'>Q_PLUGIN=ml2
</span><span class='line'>Q_AGENT=openvswitch
</span><span class='line'>ENABLE_TENANT_VLANS="True"
</span><span class='line'>PHYSICAL_NETWORK="phys1"
</span><span class='line'>TENANT_VLAN_RANGE=901:950
</span><span class='line'>OVS_PHYSICAL_BRIDGE=br-eth1
</span><span class='line'>
</span><span class='line'>ADMIN_PASSWORD=openstack
</span><span class='line'>DATABASE_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>RABBIT_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>SERVICE_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>SERVICE_TOKEN=$ADMIN_PASSWORD
</span><span class='line'>MYSQL_PASSWORD=$ADMIN_PASSWORD
</span><span class='line'>
</span><span class='line'>DEST=/opt/stack
</span><span class='line'>LOG=True
</span><span class='line'>LOGFILE=stack.sh.log
</span><span class='line'>LOGDAYS=1
</span><span class='line'>HOST_NAME=$(hostname)
</span><span class='line'>SERVICE_HOST=10.17.88.129
</span><span class='line'>HOST_IP=10.17.88.130
</span><span class='line'>MYSQL_HOST=$SERVICE_HOST
</span><span class='line'>RABBIT_HOST=$SERVICE_HOST
</span><span class='line'>GLANCE_HOSTPORT=$SERVICE_HOST:9292
</span><span class='line'>KEYSTONE_AUTH_HOST=$SERVICE_HOST
</span><span class='line'>KEYSTONE_SERVICE_HOST=$SERVICE_HOST
</span><span class='line'>SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
</span><span class='line'>
</span><span class='line'>VNCSERVER_LISTEN=$HOST_IP
</span><span class='line'>VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
</span><span class='line'>
</span><span class='line'>SCREEN_HARDSTATUS="%{= Kw}%-w%{BW}%n %t%{-}%+w"</span></code></pre></td></tr></table></div></figure>
<h2>Testing Things Out</h2>
<p>Source the openrc file in the devstack directory to obtain credentials, then use the CLI to have a look around,
create networks, and launch new virtual machine instances. Alternatively, log in to the Horizon dashboard at
<a href="http://controllerNodeIP">http://controllerNodeIP</a> and use the GUI (user: admin or demo, password: openstack).</p>
<p>Within the VCS fabric, check that a new port-profile is created for every tenant network that is created. Two
new port-profiles should exist after running stack.sh for the first time. These correspond to the initial
public and private networks that the devstack script creates.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
<span class='line-number'>10</span>
<span class='line-number'>11</span>
<span class='line-number'>12</span>
<span class='line-number'>13</span>
<span class='line-number'>14</span>
<span class='line-number'>15</span>
<span class='line-number'>16</span>
<span class='line-number'>17</span>
<span class='line-number'>18</span>
<span class='line-number'>19</span>
<span class='line-number'>20</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>DX1# show port-profile
</span><span class='line'>port-profile default
</span><span class='line'>ppid 0
</span><span class='line'>vlan-profile
</span><span class='line'> switchport
</span><span class='line'> switchport mode trunk
</span><span class='line'> switchport trunk allowed vlan all
</span><span class='line'> switchport trunk native-vlan 1
</span><span class='line'>port-profile openstack-profile-2
</span><span class='line'>ppid 1
</span><span class='line'>vlan-profile
</span><span class='line'> switchport
</span><span class='line'> switchport mode trunk
</span><span class='line'> switchport trunk allowed vlan add 2
</span><span class='line'>port-profile openstack-profile-3
</span><span class='line'>ppid 2
</span><span class='line'>vlan-profile
</span><span class='line'> switchport
</span><span class='line'> switchport mode trunk
</span><span class='line'> switchport trunk allowed vlan add 3</span></code></pre></td></tr></table></div></figure>
<p>As new instances are launched, they should be tied to the port-profile corresponding to the network they belong
to. Any instances on the same network should be able to communicate with each other through the VCS fabric.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>VDX1# show port-profile status
</span><span class='line'>Port-Profile PPID Activated Associated MAC Interface
</span><span class='line'>openstack-profile-2 1 Yes fa16.3e1b.95d0 None
</span><span class='line'> fa16.3e64.fce8 Gi 2/0/28
</span><span class='line'> fa16.3e85.5b2f Gi 2/0/28
</span><span class='line'> fa16.3ea6.3741 Gi 2/0/5
</span><span class='line'> fa16.3ecd.bfc1 Gi 2/0/5
</span><span class='line'> fa16.3eeb.87f7 Gi 2/0/28
</span><span class='line'>openstack-profile-3 2 Yes fa16.3e2c.0baf None</span></code></pre></td></tr></table></div></figure>
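<p>When troubleshooting, it helps to know which port-profile a given VM MAC landed in. A throwaway sketch that folds output shaped like the table above (a header row, profile rows, and indented continuation rows) into a MAC-to-profile map:</p>

```python
def macs_by_profile(table):
    """Map each associated MAC to its port-profile, from 'show port-profile status' text."""
    result = {}
    profile = None
    for line in table.splitlines()[1:]:          # skip the header row
        if not line.strip():
            continue
        fields = line.split()
        if not line.startswith(" "):             # row naming a new profile
            profile = fields[0]
            if len(fields) >= 4:
                result[fields[3]] = profile      # MAC sits in the 4th column
        else:                                    # indented continuation row
            result[fields[0]] = profile
    return result

# Abbreviated version of the output shown above
status = """Port-Profile PPID Activated Associated MAC Interface
openstack-profile-2 1 Yes fa16.3e1b.95d0 None
 fa16.3e64.fce8 Gi 2/0/28
 fa16.3ea6.3741 Gi 2/0/5
openstack-profile-3 2 Yes fa16.3e2c.0baf None"""
macs = macs_by_profile(status)
```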
<h2>References</h2>
<ul>
<li>Brocade OpenStack Networking Plugin Wiki <a href="https://wiki.openstack.org/wiki/Brocade-quantum-plugin">https://wiki.openstack.org/wiki/Brocade-quantum-plugin</a></li>
<li>OpenStack Documentation <a href="http://docs.openstack.org">http://docs.openstack.org</a></li>
<li>DevStack <a href="http://devstack.org">http://devstack.org</a></li>
</ul>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Red Hat RDO on Brocade Ethernet Fabrics]]></title>
<link href="http://jrametta.github.io/blog/2014/07/14/red-hat-rdo-on-brocade-ethernet-fabrics/"/>
<updated>2014-07-14T22:46:57-07:00</updated>
<id>http://jrametta.github.io/blog/2014/07/14/red-hat-rdo-on-brocade-ethernet-fabrics</id>
<content type="html"><![CDATA[<p>This short writeup describes how to set up Red Hat’s OpenStack RDO Havana release with the Brocade VCS plugin for Neutron networking.</p>
<!--more-->
<h2>INTRODUCTION</h2>
<p>RDO can easily be deployed using Packstack and the Linux Bridge plugin. Once that completes, a few steps are
required to reconfigure Neutron to use the Brocade VCS plugin, which manages both virtual and physical network
infrastructure through the OpenStack API.</p>
<p>This guide has been tested using CentOS 6.4, but should be applicable to other Red Hat based systems such as
RHEL or Fedora.</p>
<p>With the Havana release of OpenStack, we are still using the monolithic network plugin. The Icehouse release
introduces the Modular Layer 2 (ML2) plugin, which can drive a variety of layer 2
networking technologies simultaneously.</p>
<h2>BROCADE VCS CONFIGURATION</h2>
<p>Brocade VDX switches should be running NOS 4.0 or above with logical chassis mode enabled so that configuration
is distributed across all fabric nodes. See the Brocade NOS administrator’s guide for additional information.
Any ports connected to OpenStack compute/network nodes should be configured as port-profile ports.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>VDX_84_186# show running-config interface GigabitEthernet 2/0/28
</span><span class='line'>interface GigabitEthernet 2/0/28
</span><span class='line'> port-profile-port
</span><span class='line'> no shutdown</span></code></pre></td></tr></table></div></figure>
<h2>RDO INSTALLATION USING PACKSTACK</h2>
<p>This guide will walk through the process of deploying OpenStack on a single server with the option of adding
one or more additional compute nodes.
Begin by deploying OpenStack as documented in the RDO Neutron Quickstart guide at
<a href="http://openstack.redhat.com/Neutron-Quickstart">http://openstack.redhat.com/Neutron-Quickstart</a>. Install the software repos, reboot the system, and install
PackStack.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># yum install -y http://rdo.fedorapeople.org/openstack-grizzly/rdo-release-grizzly.rpm
</span><span class='line'># yum -y update
</span><span class='line'># reboot
</span><span class='line'># sudo yum install -y openstack-packstack python-netaddr</span></code></pre></td></tr></table></div></figure>
<p>Generate a PackStack answer file and edit the following variables to enable the linuxbridge plugin.
Additional compute nodes can be specified here as well.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># packstack --gen-answer-file=linuxbridge.answers
</span><span class='line'># vi linuxbridge.answers</span></code></pre></td></tr></table></div></figure>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>CONFIG_QUANTUM_L2_PLUGIN=linuxbridge
</span><span class='line'>CONFIG_QUANTUM_LB_VLAN_RANGES=physnet1:2:1000
</span><span class='line'>CONFIG_QUANTUM_LB_INTERFACE_MAPPINGS=physnet1:eth1
</span><span class='line'>CONFIG_NOVA_COMPUTE_HOSTS=10.17.95.6,10.17.88.130
</span><span class='line'>CONFIG_NOVA_NETWORK_PRIVIF=eth1
</span><span class='line'>CONFIG_NOVA_COMPUTE_PRIVIF=eth1</span></code></pre></td></tr></table></div></figure>
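<p>Rather than editing the answer file by hand, the same changes can be scripted. A throwaway sketch of rewriting KEY=VALUE lines (operating on text in memory; file I/O omitted, and the sample values below are illustrative defaults):</p>

```python
def set_answers(text, updates):
    """Rewrite KEY=VALUE lines in packstack answer-file text."""
    lines = []
    for line in text.splitlines():
        key = line.split("=", 1)[0]
        # Substitute the value when the key is being overridden, else keep the line
        lines.append(key + "=" + updates[key] if key in updates else line)
    return "\n".join(lines)

original = "CONFIG_QUANTUM_L2_PLUGIN=openvswitch\nCONFIG_NOVA_NETWORK_PRIVIF=eth0"
patched = set_answers(original, {"CONFIG_QUANTUM_L2_PLUGIN": "linuxbridge",
                                 "CONFIG_NOVA_NETWORK_PRIVIF": "eth1"})
```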
<p>Edit the configuration for the physical interface connected to the VCS fabric for tenant networks. It should
be brought up with no IP address and in promiscuous mode. All nodes should have a similar configuration.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>DEVICE="eth1"
</span><span class='line'>BOOTPROTO=static
</span><span class='line'>ONBOOT="yes"
</span><span class='line'>TYPE="Ethernet"</span></code></pre></td></tr></table></div></figure>
<p>One way to put the interface into promiscuous mode at boot is to create /sbin/ifup-local with the
following content</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>#!/bin/bash
</span><span class='line'>if [[ "$1" == "eth1" ]]
</span><span class='line'>then
</span><span class='line'> /sbin/ifconfig $1 promisc
</span><span class='line'> RC=$?
</span><span class='line'>fi</span></code></pre></td></tr></table></div></figure>
<p>Set the executable bit. This script runs during boot, right after the network interfaces are brought online.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># chmod +x /sbin/ifup-local
</span><span class='line'># /etc/init.d/network restart</span></code></pre></td></tr></table></div></figure>
<p>Run packstack on the controller node using the previously created answer file</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># packstack --answer-file=linuxbridge.answers</span></code></pre></td></tr></table></div></figure>
<p>Once complete, edit /etc/quantum/l3_agent.ini on the controller and remove br-ex from the
following line (it should be left empty)</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>external_network_bridge =</span></code></pre></td></tr></table></div></figure>
<p>Download and install a test image into glance</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># wget -c https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
</span><span class='line'># glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 \
</span><span class='line'> --container-format=bare < cirros-0.3.0-x86_64-disk.img</span></code></pre></td></tr></table></div></figure>
<h2>INSTALL BROCADE NEUTRON PLUGIN</h2>
<p>To simplify installation, all existing nova instances, networks, and routers should be deleted. Install the
Brocade plugin from the repository on the controller/network node.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
</pre></td><td class='code'><pre><code class=''><span class='line'># yum install openstack-quantum-brocade</span></code></pre></td></tr></table></div></figure>
<p>Edit the /etc/quantum/plugins/brocade/brocade.ini file on the controller/network node. The database user and
password can be copied from the packstack answer file or the preexisting linuxbridge.ini. Ensure the database
user has proper credentials.</p>
<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
<span class='line-number'>10</span>
<span class='line-number'>11</span>
<span class='line-number'>12</span>
<span class='line-number'>13</span>
<span class='line-number'>14</span>
<span class='line-number'>15</span>
<span class='line-number'>16</span>
<span class='line-number'>17</span>
<span class='line-number'>18</span>
<span class='line-number'>19</span>
<span class='line-number'>20</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>[SWITCH]
</span><span class='line'>username = admin
</span><span class='line'>password = password
</span><span class='line'>address = 10.254.8.5
</span><span class='line'>ostype = NOS
</span><span class='line'>
</span><span class='line'>[DATABASE]
</span><span class='line'>sql_connection = mysql://root:75ddb50b165b4ed0@localhost/brcd_quantum?charset=utf8
</span><span class='line'>
</span><span class='line'>[PHYSICAL_INTERFACE]
</span><span class='line'>physical_interface = physnet1
</span><span class='line'>
</span><span class='line'>[VLANS]
</span><span class='line'>network_vlan_ranges = physnet1:1000:2999
</span><span class='line'>
</span><span class='line'>[AGENT]
</span><span class='line'>root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
</span><span class='line'>
</span><span class='line'>[LINUX_BRIDGE]
</span><span class='line'>physical_interface_mappings = physnet1:eth0</span></code></pre></td></tr></table></div></figure>