
Dreaming Hyper-Converged Datacenters


Selling IT consultancy, hardware and software is still a door-to-door business in the enterprise segment. Companies won't buy your services without a trustworthy face. Even service providers don't acquire IT infrastructure without a drawn-out process of meetings.

On the other hand, AWS has changed how IT is consumed. You get the resources you need immediately. It's the perfect service if you don't care where your data lives or that you are sharing resources. Enterprises do care, even as they need to lower costs and stay agile.

What if you could just pick a datacenter in the city of your choice and get the hardware configuration you please, to build whatever you need, no matter how much capacity or speed you are looking for?

Manufacturing and delivery will converge into one concept: Hyper-Converged Datacenters (HDCs).

As soon as you confirm your online purchase, your configuration will be manufactured and delivered in seconds: any scale of compute and storage, across different sites.

3D printers would produce hardware locally at industrial scale.

19'' racks will give way to other standards. Smaller cubes will bring much higher power densities. Cubes would act as 3D puzzle pieces that can be attached in any direction, opening up better ways to save energy, water and space.

HDCs will be distributed across tens of small, centrally managed facilities in every zone. Facilities wouldn't be more than 60 miles apart, making it easier to pick an appropriate location for your data processing based on proximity.

Hardware deployments will be entirely orchestrated by software; humans wouldn't touch a single piece. Cabling will be solved in a matter of seconds through a fiber optic circuit built into the cube's structure.

You would be able to buy software online along with the hardware. Software-defined technologies help add or remove hardware with no disruption (Note 1).

At last, your platform is done: a couple of clicks away from letting users access the service catalog. What AWS has built over years, you'd get in seconds and at about 10% of the cost.

See you!

Note 1: Software products are embracing standards to run on commodity hardware. IT vendors are openly building communities around these standards, and contributing to open source projects forces them to stay competitive.



Should I use OpenStack?


OpenStack is just an ingredient. If you dare to use it, you'll deal with other IT pieces like hypervisors, storage management and databases.

Why do you think you need it? Is getting your IT resources on time worth it? If so, jump into the next lines.

Do you have security concerns about which country your data is processed in? If so, you'd better fill that gap yourself rather than with a web-delivered subscription service.

Are your infrastructure requirements growing at a steady pace along with your budget pressure? If not, team up with buddies like AWS and Azure. Still worried that your data could end up in other geographies? Sorry, but you cannot have everything all at once… most of the time.

After this small review, OpenStack should be a good option.

Distributions from Canonical (Ubuntu), Red Hat and Mirantis come with delivery and managed services. Consultancy costs can run to millions depending on the complexity of the solution. Managed services would be around $500/month per physical node.

Keeping orchestration to the basics with Nova would be enough for most use cases, and it will save you tons of money. Nova requires Keystone and Glance, along with a database such as MySQL and a message broker such as RabbitMQ. Finally, a hypervisor like KVM is the last ingredient.
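As an illustration, that minimal ingredient list maps to a handful of packages on a single node (a rough sketch; the package names follow RDO-style repositories and are assumptions that vary by distribution and release):

yum install -y mariadb-server rabbitmq-server         # database and message broker
yum install -y openstack-keystone openstack-glance    # identity and image services
yum install -y openstack-nova qemu-kvm libvirt        # compute orchestration plus the KVM hypervisor
systemctl enable mariadb rabbitmq-server libvirtd
systemctl start mariadb rabbitmq-server libvirtd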

A resilient configuration calls for no fewer than 4 physical servers and a couple of 10Gbps switches. The total hardware bill won't amount to less than $200K.

Complex business apps and their automated tasks will require advanced orchestration features. Ingredients like Swift, Cinder, Heat and Neutron come into play, along with other pieces of software like Open vSwitch, Ceph, Corosync and Pacemaker.

Traditional redundant architectures will give way to more scalable and resilient configurations. A "design-to-fail" practice, championed by Netflix after the largest AWS outage, will make you think twice before relying on the platform alone for your app's availability (note 1).

Developers and IT operators are converging in an evolving movement called DevOps. It brings more agile IT service delivery to your organization, and it's the only way to keep any OpenStack initiative under control. DevOps requires new cultural conditions that could take years to put in place. Alternatively, you can get these skills on demand, paying on average $100/month per instance.


OpenStack is pushing hard on its vision of a "Powered Planet": federated identity across geographies and vendors. It will open your organization to amazing ways of collaborating and interoperating, a world of distributed computing that even companies like AWS and Microsoft are still far from achieving. This is the power of a community and its believers: a power that saves money and makes customers happier.

See you!

Note 1: Lessons Netflix Learned from the AWS Outage.


Timely drives innovation


Startups are the way companies innovate nowadays. Organizations fail to make innovation an ordinary in-house process, even though they have the talent and experience. Failures come mostly from a lack of culture around embracing change. Seed capital and acquisitions have become the workaround.

Having ideas is easier than changing a paradigm like “It works, don’t touch it”

Developing new products is hard even with sponsorship from the board. The CFO's office will give you the money to fire the project up, but not before you bring a solid business case.

At this point you have the perfect conditions to make it happen: sponsorship, money and dedicated resources. Still, success is not guaranteed.

It doesn't matter how many mistakes you make, or how bad they are, as long as you solve them in time. The team has to work as if every day counts like it's their last. Some practices will keep the sense of urgency alive:

  • Define weekly reviews and objectives, without losing sight of the final target.
  • Meetings are not just for planning. They are part of the execution.
  • Ideas and meetings are worthless if you don't turn them into actions and deadlines.
  • And deadlines are "committed in blood" the moment the team agrees to them.

See you soon!


OpenDayLight focuses on OpenStack Networks and Lithium is the proof


Technical Level: Medium
OpenStack Knowledge: Medium (Architect)
OpenDayLight Knowledge: Fundamentals

Universal OpenStack adoption has changed the way new datacenter services are developed. Questions like "How much storage capacity and bandwidth does this new app need?" are being addressed by the emerging role of cloud architects. IT storage/network architects, who used to handle this app by app, now face orchestration challenges as their top priority: in other words, how many storage/network services can be made directly available from the orchestration dashboard to the end user.

The time when custom-designed hardware brought an important competitive advantage has almost vanished. Most advanced datacenters and cloud services are reduced to white commodity boxes (check out my note about DSSD as one exception: Don't try to attach a horse hitch to a tesla). Software is the superior brain managing all resources, making the most of them and bringing highly responsive analytics never seen before. Just check out how Google got the perfect combination of scale and speed for its networks in Pulling Back the Curtain on Google's Network Infrastructure (one petabit/sec of total bisection bandwidth).

IT resources used to be defined by each app's performance/capacity needs. Now their definition is based on compatibility with orchestration platforms like OpenStack, even before looking at the app's needs. Storage decisions are being reduced to a choice between block or object: Cinder or Swift. Throughput tests are run from instances to compare price/performance across different technologies. So who cares what pieces of hardware are backstage, if you get the performance your app needs from the instances themselves?
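As an illustration, a quick throughput test from inside an instance could look like this (a minimal sketch; fio is assumed to be installed in the guest, and the volume under test is assumed to be mounted at /mnt/vol):

# Random-write test against an attached Cinder volume mounted at /mnt/vol
fio --name=randwrite --filename=/mnt/vol/testfile --ioengine=libaio \
    --rw=randwrite --bs=4k --size=1G --numjobs=4 --direct=1 --group_reporting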

Even so, network tech is ahead of storage in this story of change. SDN started its development much earlier than SDS (Software Defined Storage). And with NFV, a long-awaited component among cloud rock stars, SDN became a critical resource to manage, scale and control all these chained communication services.

Avoid vendor lock-in to stay competitive; open standards make honest vendors. Most SDN vendors/projects have decided to move "OpenStack network virtualization" up to the first priority in their roadmaps, and OpenDayLight is no exception. Although Service Function Chaining has been the core use case, ODL's board has also decided to concentrate efforts on Neutron and OVSDB.

Flat network virtualization managed only through Nova was left behind years ago. Multi-layer app topologies with significant network-function needs have been stressing these projects and their supporters. Neutron, formerly called Quantum, has been enriched along with the growth of NFV adopters.

A warm OpenStack integration from OpenDayLight Helium

Dave Neary gave us a short guide to implementing "OpenDaylight and OpenStack" at the last OpenStack Summit in Vancouver. After a short review of ODL Helium, we were taken through an Open vSwitch ML2 mechanism overview based on Networking in too much detail from Red Hat. You might like to take a look at my There's real magic behind openstack neutron notes before going on to the next lines.

Later Dave took us into the ODL southbound interface to OVSDB and pointed out the differences against the default Nova/Neutron settings. ODL Helium removes br-tun (the tunnel bridge) from every node; VXLAN encapsulation happens on br-int instead. We also got a glimpse of how Lithium removes the L3 agent and the external bridge from Neutron nodes and reaches parity with Neutron features like LBaaS.

The next picture highlights ODL features that work with OpenStack Juno:

(Image: OpenDayLight Helium integration with OpenStack Juno Neutron via Open vSwitch and OVSDB)

Drawbacks: Migrating from the default OVS/Neutron settings to ODL is absolutely disruptive; you will have to delete every existing configuration. Security groups cannot be centrally managed by ODL yet. However, ODL brings an important change in Lithium to manage security rules in Neutron; see the next section.
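For illustration, the cut-over roughly amounts to wiping Neutron's OVS state, handing each Open vSwitch over to the ODL controller, and switching the ML2 mechanism driver. This is only a sketch based on the usual Helium/Lithium integration guides: the controller address is a placeholder and the networking-odl settings are assumptions, so follow the official guide for a real migration.

# On every compute/network node: stop the agent and clear the old OVS state
systemctl stop neutron-openvswitch-agent
ovs-vsctl del-br br-tun                      # the tunnel bridge is no longer needed with ODL
ovs-vsctl set-manager tcp:192.0.2.10:6640    # hand OVSDB management over to the ODL controller

# On the controller node, switch the ML2 mechanism driver (ml2_conf.ini):
#   [ml2]
#   mechanism_drivers = opendaylight
#   [ml2_odl]
#   url = http://192.0.2.10:8080/controller/nb/v2/neutron
#   username = admin
#   password = admin
systemctl restart neutron-server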

Amazing progress in Lithium toward parity with OpenStack Neutron features

Below you can see a picture with details about what is new in the Lithium release and how it works with OpenStack. ODL's Neutron and OVSDB services increased feature parity with Neutron, including support for LBaaS, Distributed Virtual Router, NAT, external gateways and floating IPs. There are also improvements in the performance and stability of this integration.

(Image: OpenDaylight Lithium integration with OpenStack Kilo Neutron via OVS/OVSDB)

The next picture shows how DLUX automatically creates a graphical topology based on the information received from every OpenFlow switch. VTN manages all OVS instances on the OpenStack control and compute nodes. Even the underlay switches used as the data plane to connect these nodes can be configured by VTN Manager.

(Image: DLUX topology dashboard showing OpenStack/OpenFlow nodes managed by VTN Manager, Lithium/Kilo)

Although Nova security groups are not centrally managed, Lithium supports integration with OpenStack Neutron Group Based Policy (GBP), which has been available since its first release in Juno. OpenStack's GBP is also supported by vendor-specific policy drivers from Nuage Virtualized Services Platform, Cisco ACI and One Convergence. Honestly, I don't believe Nova security groups will ever be supported by ODL; instead, GBP offers a more advanced and powerful alternative.

OpenStack's GBP makes it easier for security operators to apply rules based on an app's profile and avoid direct interaction with the underlying network infrastructure. GBP also describes the chaining of multiple network services, speeding up app development. It helps operations, security and app teams get along better.

Among other improvements, ODL's VPN service has direct integration with OpenStack Neutron (APIs adhering to the OpenStack BGP VPN blueprint). ODL's AAA component (Authentication, Authorization and Accounting) also integrates with Keystone for federation and SSO functions.

References worth checking out:


Thinking of migrating your stuff to open-source cloud stacks?


Technical Level: Basic
OpenStack Knowledge: Basic

No doubt OpenStack is the preferred orchestration platform among large cloud users. If you are figuring out how to lower IT operation costs, this post could help you understand its complexity and benefits.

Before implementing any open platform stack, check which apps are candidates to move onto it. There is no perfect recipe to identify candidates. You could start by identifying apps likely to run on an open solution like KVM. Even if an app would run properly without issues, its vendor could make you change your mind: most vendors won't give you any support, or will raise your license costs shamelessly, if you don't use their proprietary stack as the underlying infrastructure.

A tech junkie like me loves to put technologies like these to work just to see what happens. Nevertheless, the change is only good if you get savings. After you identify the candidates, run the math to check whether you will effectively reduce your operating costs. Although open source is free to use, you will need a talented tech team to maintain it, and talent is not cheap (and I am not talking just about salary).

Assessment: Identify Candidates

Let's start with the following questions. They will help gather the information required to begin:

  1. What applications, versions and types of licenses do you have in your company?
  2. What databases, versions and types of licenses does each of those applications use?
  3. What operating systems, versions and types of licenses run on each server instance belonging to your applications?
  4. How much memory and disk does each server assigned to every application have?

The idea is to build up a table like the following:

(Image: example application inventory table, with candidate rows highlighted in blue, green and red)

As we can see, three of these apps are candidates to migrate to an open platform: the blue-filled rows. The red-filled ones don't need further explanation: taking those apps to a web-delivered subscription model is by far the best option for any business, unless industry regulations prevent you from using a public service that shares resources between customers.
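If you keep that inventory as a simple CSV, even a one-liner can pull out the candidate rows. This is just a toy sketch: apps.csv and its columns (app, db, os, license, hypervisor_constraint) are hypothetical.

# List apps whose vendor imposes no hypervisor constraint: the open-platform candidates
awk -F, '$5 == "none" {print $1}' apps.csv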

Choosing an open-source hypervisor is not driven exclusively by technical factors

The green-filled row from the previous table is a case worth examining. Some vendors' apps cannot fully move to an open platform stack. Even if they run without technical issues on KVM, some vendors require their own hypervisor tech or an enterprise-licensed one like ESX; if you don't meet these requirements, they won't give you the technical support you need. SAP obliges you to work with ESX and other technical conditions. SAP is known for this inflexibility, and although it may be an important factor in guaranteeing the success of their implementations, it is also the main reason customers are trying to escape and move to a web app delivery subscription model (SaaS).

Similar cases are found with vendors like Oracle. Oracle databases cannot run on KVM, or rather you will be charged for the entire number of physical CPU cores. The best way to avoid this obscene cost is to use OVM (Oracle VM), a modified Xen open-source project. That pushes you to buy Oracle Linux, the only distribution that comes with OVM. But Oracle Linux is cheaper if you buy Oracle hardware to run it on, and so on: it's a trap that starts from the hypervisor. Likewise, SQL Server with its enterprise license requires Hyper-V to avoid prohibitive costs.

Other cases are specific to Linux distributions. Red Hat works perfectly on KVM, but you have to use their KVM distribution to be allowed to get updates as part of the maintenance subscription.

Finally, a significant number of app vendors force you to stay on their preferred compute virtualization tech. The type of hypervisor depends on the type of application, database and operating system running on the instances. Choosing the hypervisor based on performance metrics is no longer the right approach: the hypervisor is commodity, and orchestration systems are doing their job to keep it out of sight. App and DB license costs are instead the most important factor when evaluating candidates to migrate to open platforms.

Defining other components in the open platform stack

As soon as you've settled on a hypervisor, you will know which open-source projects you can use in your stack.

If you've decided to use KVM (e.g. Canonical's distribution), then you can use Ceph as the storage component in the stack. Ceph also works with OpenStack Cinder, and it can be used under its open-source license even if your KVM support comes under a different model. Ceph is one of the most used storage projects among OpenStack users.
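For reference, wiring Ceph in as a Cinder backend mostly means pointing the RBD driver at a pool. This is a minimal sketch: the pool name, user and secret UUID are placeholders, and crudini is just a convenient way to edit the ini file (any editor works).

# Point Cinder at a Ceph pool through the RBD driver (all values are placeholders)
crudini --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.rbd.RBDDriver
crudini --set /etc/cinder/cinder.conf DEFAULT rbd_pool volumes
crudini --set /etc/cinder/cinder.conf DEFAULT rbd_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/cinder/cinder.conf DEFAULT rbd_user cinder
crudini --set /etc/cinder/cinder.conf DEFAULT rbd_secret_uuid 457eb676-33da-42ec-9a8c-9293d545c337
systemctl restart openstack-cinder-volume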

Open vSwitch is the other key tech component. OVS would be used for network virtualization, and it works with OpenStack Neutron. You could also add an SDN controller based on OpenDayLight: Lithium, the latest ODL release, has strong parity with Neutron's features, including Group Based Policy (see my previous post for further details).

Back to our green-filled row. If you need to work with commercial hypervisors like ESX or Hyper-V, your options for using open source will be limited. Ceph is not a storage option with either of them; you can use traditional arrays connected through traditional protocols like iSCSI or NFS. A good option to keep the modular white-box concept could be EMC ScaleIO or VMware VSAN, both of which are a perfect match for ESX. The same applies to ESX's network virtualization: VMware NSX would be the preferred network component for ESX.

The hypervisor you finally choose will define the rest of the tech components in the stack. Depending on that, you will end up with a purely or only partially open-source platform.

Migration plan

The next picture shows the migration steps, from identifying candidates to optimizing resources.

(Image: migration plan timeline, from candidate identification to resource optimization)

Timing is an estimate. The timeframe required to build your open platform depends on hardware availability, network connections, datacenter conditions and the experience of the technical team. Later on come important tasks such as leveraging the cloud (taking advantage of its orchestration and automation resources) and optimizing resources through monitoring and an appropriate data protection strategy.

Plan Z: Functional migration

The last resort is a migration at the functional level. Timing depends on the functional complexity of each application.

Migrating from a commercial app/DB/OS to an open-source one: for example, from a Red Hat Enterprise Linux distro to a Canonical one to avoid getting trapped in a maintenance subscription, or from SAP CRM to SugarCRM to avoid enterprise license renewal costs.

Migrating between Linux distros is not so risky. Almost any open-source app or database project that works on a commercial Linux distro works on a free one. If you migrate a web app based on Tomcat/MySQL from RHEL to Ubuntu, you will save serious money on Red Hat maintenance subscriptions for guest/host OSs. This kind of change is fast and practically guaranteed to succeed if you have a talented team to support it. However, if there is vendor software to move between OSs, check technical and license liabilities very carefully before anything else.
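For illustration, the mechanics of moving a Tomcat/MySQL web app from RHEL to Ubuntu boil down to a dump-and-restore plus a redeploy. A sketch only: database names, hosts and webapp paths are placeholders, and package names depend on the Ubuntu release.

# On the RHEL source: dump the application database and grab the webapp
mysqldump -u root -p appdb > appdb.sql
scp appdb.sql /var/lib/tomcat/webapps/myapp.war ubuntu-host:/tmp/

# On the Ubuntu target: install the stack, restore the data and redeploy
apt-get install -y tomcat7 mysql-server
mysql -u root -p -e 'CREATE DATABASE appdb'
mysql -u root -p appdb < /tmp/appdb.sql
cp /tmp/myapp.war /var/lib/tomcat7/webapps/
service tomcat7 restart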

Migrating data and processes at the app level is a struggle: expensive and risky. Review it very carefully before going in. This option requires a functional team that understands your business processes. A valid alternative is to start over on a new app and keep the old one just as a reference; the drawback is that you will have to manage both for years. Either way, this option requires an exhaustive review.

References worth checking out:

  • First Steps to Creating a Cloud Computing Strategy for 2013 at Forbes: a bit old, but still the reality for many companies needing to fit a cloud strategy into their IT budget. I like the "Cloud decision framework"; it's a good starting point.
  • Virtualization and Cloud Migration: this is the description of the migration services of an IT professional services company. I like the way they describe the services, the requirements, the challenges and the planning, all summed up in the picture I've borrowed for this post.

Nuage Networks: Agility relies on the overlay


Cell phones keep their network configuration across different zones. No matter where you are, or which antennas, routers or switches your data/voice is transferred through, you will stay connected to your loved ones.

Your private cloud is distributed across different spots. Instances move between clusters or facilities driven by power/hardware failures, performance or proximity. Wouldn't it be cool if instances' network settings and rules followed them wherever they are moved?

Reach your instances at any point in the cloud. A powerful statement, and a long-term vision of what network technologies should actually be working toward.

I've started at Nuage as Tech Business Dev for CALA (LATAM + Caribbean). The adoption of technologies like SDN/NFV and CMSs (OpenStack/CloudStack) has just begun in this part of the world. We've got a big opportunity: selling a whole new concept of IT services, something most customers are demanding today: agility.

OpenStack/OpenVSwitch Network Challenge: Why would I have to risk my career for being agile?

Managing thousands of tunnels over a limited, shared underlay network pipe is starting to corner Net Ops. How do you control traffic to and from every instance? How do you manage thousands of distributed ACLs? How early can you identify a traffic anomaly (noisy neighbors)? How fast can you fix it?

Want to stay purely open source? Neutron needs as much work as your network APIs are complex. Issues start coming up as soon as tenants provision advanced network services like vRouters or load balancers. How do they scale? How reliable are they?

Instances of any kind must serve users, and users sit anywhere inside or outside your DC. Business apps are reached through the OVS they are connected to, and every OVS should help the rest of the world find them. The fact is OVS can't do this by itself. Neutron does part of the job, but Neutron can't speak directly to core/edge routers; it must rely on external, feature-rich pieces of hardware and software.

VTEP hardware encapsulates/decapsulates VXLAN in response to API-triggered commands (e.g. OVSDB). VTEP devices just need to provide an ML2 plug-in to help Neutron do so. Now OVS instances can skip the network nodes and communicate closer to the outside. The challenge is then reduced to: how do those packets get out?

The DC core/edge will route those packets to any required remote point. Most of the time, that point won't be directly addressable; the destination could be buried inside another private address space. How do you know what is actually happening in there? How do you stay in tune as remote instances are created and terminated?

Would you call the Net Ops team? They will kindly take care of it, after some paperwork, touching ultra-sensitive resources that rely on thousands of configuration lines. Core systems route millions of packets carrying hundreds of critical services. How many times would they help you out just to update a couple of routes? How promptly would they finish that job?

If I were a Net Ops engineer in charge of a DC core, I would be better off delaying any change request. I would have to be 100% sure; many meetings and validations would help keep my neck safe. I don't blame them: their job is to keep things running and stable. Why would I have to risk my career to be agile?

Nuage brings a powerful and open overlay network option

Nuage is not the only option to manage and control the overlay network; you could even do it exclusively with open source. However, as I've shown in the previous section, there are many considerations.

Flexible and open: all actions can be applied through APIs. Most SDN/NFV vendors have constraints supporting containers or hypervisors like KVM; Nuage even supports bare metal through virtual or hardware gateways. Underlay switches are managed through a standardized OVSDB schema supported by vendors like Arista, HP and Cumulus Networks.

Nuage reduces the complexity of thousands of ACLs to just a handful of templates. One security template can apply policies to an entire Layer-3 domain of subnets and instances, and you can reuse it across many more domains. App devs don't need to be aware of which IP address space and network settings are assigned, and there is no need to deal with security teams after the initial template setup. It speeds up any app deployment.

Nuage simplifies the addition of network services. Its partner ecosystem proves it: F5, Palo Alto Networks and Fortinet are among those partners. Although most have developed a plug-in for Neutron, some of them still need help to work with it or to take fuller advantage of their features.

Routing and switching are spread across the compute nodes. Tunnels are efficiently provisioned thanks to advanced routing features such as a VPLS backbone, saving significant compute and network resources. These advantages come along with a powerful IP routing technology: the Alcatel-Lucent Service Routing Operating System (SROS). More than 300K deployments bring enough confidence to manage critical services through this powerful SDN controller.

That confidence brings agility to network services: a mature routing technology that automates changes while avoiding unnecessary risk to DC core systems. Once Net Ops sets up advanced routing between the edge routers and the SDN controller through MP-BGP, we are all set. Instances and subnets can be brought down or up, or moved between datacenters, and communication still succeeds. Net Ops can now be agile and still keep their necks supporting their heads: less closing-your-eyes-waiting-to-be-hit by a customer after a "write config" command, and more peace of mind.

See you soon!


Installing OpenStack Kilo (Red Hat OSP7) LBaaS with @NuageNetworks VSP 3.2R4 (HAProxy)


Hi there. You can find hundreds of posts about how to install OpenStack LBaaS. In this case I'll bring a step-by-step guide to implementing LBaaS with Nuage VSP 3.2R4 on OpenStack Kilo OSP7 (Red Hat). Kilo uses the LBaaS v2 API.

I suggest you get the "VSP OpenStack Kilo Neutron Plugin User Guide (Release 3.2.R5 (Issue 2))". Most of this post is based on that guide, in particular the section "Using OpenStack LBaaS with the Nuage Neutron Plugin".

I want to say thanks to Claire. She's given me HUGE support.

I've tested all these commands in our lab called Nuts. Hussein and Remi did a great job providing this amazing resource (thanks, guys): a tool I've used with many customers to show how great Nuage is working with OpenStack. Check the details about it below.

(Image: Nuts lab description)

 

Nuage Virtualized Services Directory (VSD) is the brain, serving "as a policy, business logic and analytics engine", and it can be 100% managed through JSON-format APIs. Of course, it also gives you a GUI, which I'll show shortly. The VSC programs every network function and acts as the datacenter network control plane. More details in my previous post and at Nuage.

The lab has a consolidated OpenStack controller/network node called os-controller running projects like Neutron, Keystone and Glance, plus two Nova nodes with KVM and Nuage VRS (based on Open vSwitch).

os-controller is already configured with the Nuage plugin for Neutron. The /etc/neutron/neutron.conf file contains the line:
core_plugin = neutron.plugins.nuage.plugin.NuagePlugin

And /etc/neutron/plugin.ini should be like this:

 
default_net_partition_name = OpenStack_Nuage_Lab
server = 10.0.0.2:8443
serverauth = osadmin:osadmin

### Do not change the below options for standard installs
organization = csp
auth_resource = /me
serverssl = True
base_uri = /nuage/api/v3_2
cms_id = 540d931d-0585-4fce-8c3d-064fb7f357e0

Installing the plug-in on the controller node

Let's start by installing python-neutron-lbaas on the controller node:
[root@os-controller ~(kyst_adm)]# yum install python-neutron-lbaas

Update the service_providers section in /etc/neutron/neutron.conf (don't use lbaas_agent.ini):


[service_providers]
service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Add the service plugin for the LBaaS v2 API under the [DEFAULT] section:


[DEFAULT]
service_plugins=neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

Restart neutron service:
[root@os-controller ~(kyst_adm)]# systemctl restart neutron-server.service

Now let's move on to installing HAProxy and the Nuage plugin on the network node.

Installing HAProxy on the network node

Installing HAProxy is simple, just run: [root@os-controller ~(kyst_adm)]# yum install haproxy
However, TCP ports 80 and 8080 are already in use by other processes in our lab (use netstat -anp to check). So I changed the frontend port to 5000 and then restarted the service. The HAProxy file (/etc/haproxy/haproxy.cfg) is the following:


global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  main *:5000
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

backend app
    balance     roundrobin
    server  app1 127.0.0.1:5001 check
    server  app2 127.0.0.1:5002 check
    server  app3 127.0.0.1:5003 check
    server  app4 127.0.0.1:5004 check

Now, restart the service: systemctl restart haproxy.service

And check status of the service:



[root@os-controller etc]# service haproxy status
Redirecting to /bin/systemctl status  haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2015-12-21 13:58:19 PST; 5s ago
 Main PID: 13746 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─13746 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─13747 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─13748 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Dec 21 13:58:19 os-controller.novalocal systemd[1]: Started HAProxy Load Balancer.
Dec 21 13:58:19 os-controller.novalocal systemd[1]: Starting HAProxy Load Balancer...
Dec 21 13:58:19 os-controller.novalocal haproxy-systemd-wrapper[13746]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/...pid -Ds
Hint: Some lines were ellipsized, use -l to show in full.

It's time to install the LBaaS plugin on the network node.

Installing the LBaaSv2 plugin on the network node

OK, just a reminder: this assumes you already have an OpenStack installation with the Nuage plugin working properly. If that's not the case, you will have to install the Nuage plugin for Neutron before going further.

We need to install VRS on our network node. VRS will be in charge of managing communication between the compute nodes and the LBaaS.

Installing the VRS service on the network node

In this case we are going to follow the instructions from the "VSP Install Guide Release 3.2R4", section "VRS AND VRS-G SOFTWARE INSTALLATION ON REDHAT AND UBUNTU". This is Red Hat Linux 7, so we'll follow the guidelines for that distro and version.

You will need the Nuage-VRS-3.2.4-133-el7.tar.gz file for later. Connect to support.alcatel-lucent.com and get it.

Let's enable the EPEL repository as our first action: rpm -Uvh http://mirror.pnl.gov/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Now, let's enable the following repo in /etc/yum.repos.d/redhat.repo by adding this section:


[rhel-7-server-optional-rpms]
metadata_expire = 86400
sslclientcert = /etc/pki/entitlement/7395579051263769833.pem
baseurl = https://cdn.redhat.com/content/dist/rhel/server/7/$releasever/$basearch/optional/os
ui_repoid_vars = releasever basearch
sslverify = 1
name = Red Hat Enterprise Linux 7 Server - Optional (RPMs)
sslclientkey = /etc/pki/entitlement/7395579051263769833-key.pem
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
gpgcheck = 1

Now we have to run yum update.
It's time to go for a cup of coffee; it's going to take some time.

Install the following dependencies:


yum install libvirt
yum install python-twisted-core
yum install perl-JSON
yum install qemu-kvm
yum install vconfig

Let's install the VRS packages that we've just downloaded:


tar zxvf Nuage-VRS-3.2.4-133-el7.tar.gz
yum localinstall nuage-openvswitch-3.2.4-133.el7.x86_64.rpm
yum localinstall nuage-openvswitch-dkms-3.2.4-133.el7.x86_64.rpm

Now, let's add the personality and the controller's IP to our /etc/default/openvswitch file. The file ends up like this:



PERSONALITY=vrs
UUID=
CPE_ID=
DATAPATH_ID=
UPLINK_ID=
NETWORK_UPLINK_INTF=
NETWORK_NAMESPACE=
PLATFORM="kvm"
DEFAULT_BRIDGE=alubr0
GW_HB_BRIDGE=
GW_HB_VLAN=4094
GW_HB_TIMEOUT=2000
MGMT_ETH=
UPLINK_ETH=
GW_PEER_DATAPATH_ID=
GW_ROLE="backup"
CONN_TYPE=tcp

ACTIVE_CONTROLLER=10.0.0.3
SKB_LRO_MOD_ENABLED=no
DEFAULT_LOG_LEVEL=

Now we need to take care of SELinux or our Open vSwitch will fail. You have to either disable SELinux or set it to permissive. You can just use the CLI here with setenforce 0, and change the /etc/selinux/config file so the setting survives a later reboot. Use the getenforce command to check that the status is "Permissive".
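For reference, the whole SELinux step comes down to three commands (a small sketch; the sed expression assumes the stock /etc/selinux/config format):

setenforce 0                                                     # switch to permissive right now
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config   # make it survive reboots
getenforce                                                       # should print "Permissive"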

Let's restart Open vSwitch with systemctl restart openvswitch.service

Now, let’s check if the service is working properly:

[root@os-controller ~(kyst_adm)]# ovs-vsctl show
4af4f578-7fbf-407c-b04a-8f00336421b1
    Bridge "alubr0"
        Controller "ctrl1"
            target: "tcp:10.0.0.3:6633"
            role: master
            is_connected: true
        Port "alubr0"
            Interface "alubr0"
                type: internal
        Port "svc-rl-tap1"
            Interface "svc-rl-tap1"
        Port "svc-rl-tap2"
            Interface "svc-rl-tap2"
        Port svc-pat-tap
            Interface svc-pat-tap
                type: internal
    ovs_version: "3.2.4-133-nuage"

Now we are ready to resume our plugin installation

Back to installing the LBaaS v2 plugin on the network node

Let's add the following line to our /etc/neutron/neutron.conf file under the [DEFAULT] section:


[DEFAULT]
ovs_integration_bridge = alubr0

Then /etc/neutron/neutron.conf will look like the following:


[DEFAULT]
service_plugins=neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
ovs_integration_bridge = alubr0
verbose = True
router_distributed = False
debug = False
state_path = /var/lib/neutron
use_syslog = False
log_dir =/var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = neutron.plugins.nuage.plugin.NuagePlugin
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 86400
dhcp_agent_notification = True
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
agent_down_time = 75
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
allow_automatic_l3agent_failover = False
dhcp_agents_per_network = 1
l3_ha = False
api_workers = 4
rpc_workers = 4
use_ssl = False
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.0.0.10:8774/v2
nova_region_name =RegionOne
nova_admin_username =nova
nova_admin_tenant_id =f33c6e3b0519478ab6e55fef9a1a3d1c
nova_admin_password =56415bf8a5444bb6
nova_admin_auth_url =http://10.0.0.10:5000/v2.0
send_events_interval = 2
rpc_backend=neutron.openstack.common.rpc.impl_kombu
control_exchange=neutron
lock_path=/var/lib/neutron/lock


[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
report_interval = 30

[keystone_authtoken]
auth_uri = http://10.0.0.10:5000/v2.0
identity_uri = http://10.0.0.10:35357
admin_tenant_name = services
admin_user = neutron
admin_password = 3045b48a69f340b0

[database]
connection = mysql://neutron:92ed70427a014077@10.0.0.10/neutron
max_retries = 10
retry_interval = 10
min_pool_size = 1
max_pool_size = 10
idle_timeout = 3600
max_overflow = 20

[nova]

[oslo_concurrency]

[oslo_policy]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay = 1.0
rabbit_host = 10.0.0.10
rabbit_port = 5672
rabbit_hosts = 10.0.0.10:5672
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_ha_queues = False

[service_providers]
service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default


Let's configure the /etc/neutron/lbaas_agent.ini file as follows:


[DEFAULT]
ovs_use_veth=False
interface_driver=nuage_neutron.lbaas.agent.nuage_interface.NuageInterfaceDriver

[haproxy]

Finally, let's start our LBaaS agent with systemctl start neutron-lbaasv2-agent and we are done.
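Optionally, enable the agent at boot and confirm that Neutron can see it (a quick check; the exact agent label in the listing varies by release):

systemctl enable neutron-lbaasv2-agent    # start the agent automatically at boot
neutron agent-list                        # the LBaaS v2 agent should show up as alive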

We can start adding new load balancers at any moment

Playing with LBaaS

Sadly, Horizon doesn't support all the panels for LBaaSv2, so you will have to use the Neutron APIs instead (please don't blame Nuage or me for that). Liberty solves this anyway (I haven't tested it yet). I suggest you start with the command line, as I show in the following lines:


[root@os-controller ~(kyst_adm)]# neutron net-list
+--------------------------------------+------------------+------------------------------------------------------+
| id                                   | name             | subnets                                              |
+--------------------------------------+------------------+------------------------------------------------------+
| 24b003ec-d666-4814-9c55-5cb14d65a065 | adm.priv2        | f5944244-4e12-4c8a-a748-0326e8a015e8 192.168.52.0/24 |
| 2f61f543-214f-462f-afb7-182ec816abe9 | external_network | 8f73aa92-e8af-454b-bffe-55c72257453b 10.0.1.0/24     |
| 562972a3-3403-49a3-87aa-d2c9a714a0fd | adm.priv4        | c317c461-7da7-45b9-b1f0-ce45f0acfafa 192.168.54.0/24 |
| 7080b26f-e556-4207-8c5a-e403865dcc30 | adm.priv1        | f3355820-69bc-40c6-bfe2-e6c07df24d30 192.168.51.0/24 |
| b1a4897a-d6e8-4a0f-ae13-41a6bc40cea5 | private          | 45916c43-0f29-48bf-9fdd-332a2c99be5f 172.16.1.0/24   |
| b3631409-eace-4ae1-81b4-499fb0ce3104 | adm.priv3        | a7304423-2193-4f0c-8e95-9868cc329698 192.168.53.0/24 |
| eb0b7fc6-efd7-469d-9b6d-e0188719f5b1 | t-system01       | ff71594b-1e4e-4fdb-ac79-e71cf444bac2 169.87.23.0/24  |
+--------------------------------------+------------------+------------------------------------------------------+

[root@os-controller ~(kyst_adm)]# neutron lbaas-loadbalancer-create --name lb3 45916c43-0f29-48bf-9fdd-332a2c99be5f
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | c5e548b9-6936-435d-b468-0aa4b9fcd08a |
| listeners           |                                      |
| name                | lb3                                  |
| operating_status    | ONLINE                               |
| provider            | haproxy                              |
| provisioning_status | ACTIVE                               |
| tenant_id           | 63d41744393243b6a51a95c6063fe4c1     |
| vip_address         | 172.16.1.4                           |
| vip_port_id         | b13807f4-371d-4df4-9e70-6c4db70e6f49 |
| vip_subnet_id       | 45916c43-0f29-48bf-9fdd-332a2c99be5f |
+---------------------+--------------------------------------+

If you want to see these load balancers properly in the Nuage console, you will have to create a listener as follows:


[root@os-controller ~(kyst_adm)]# neutron lbaas-loadbalancer-list
+--------------------------------------+------+---------------+---------------------+----------+
| id                                   | name | vip_address   | provisioning_status | provider |
+--------------------------------------+------+---------------+---------------------+----------+
| a986bead-2fe5-4f53-a607-0c197565a1b3 | lb1  | 192.168.51.14 | ACTIVE              | haproxy  |
| b1bd8993-acc7-484d-ba93-b5ce185510b4 | lb0  | 192.168.51.13 | ACTIVE              | haproxy  |
| c5e548b9-6936-435d-b468-0aa4b9fcd08a | lb3  | 172.16.1.4    | ACTIVE              | haproxy  |
+--------------------------------------+------+---------------+---------------------+----------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-listener-create --loadbalancer lb3 --protocol HTTP --protocol-port 80 --name listernerlb3
Created a new listener:
+--------------------------+------------------------------------------------+
| Field                    | Value                                          |
+--------------------------+------------------------------------------------+
| admin_state_up           | True                                           |
| connection_limit         | -1                                             |
| default_pool_id          |                                                |
| default_tls_container_id |                                                |
| description              |                                                |
| id                       | d0fb168b-008b-44b8-9bbc-b59d4ada021e           |
| loadbalancers            | {"id": "c5e548b9-6936-435d-b468-0aa4b9fcd08a"} |
| name                     | listernerlb3                                   |
| protocol                 | HTTP                                           |
| protocol_port            | 80                                             |
| sni_container_ids        |                                                |
| tenant_id                | 63d41744393243b6a51a95c6063fe4c1               |
+--------------------------+------------------------------------------------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-listener-list
+--------------------------------------+-----------------+--------------+----------+---------------+----------------+
| id                                   | default_pool_id | name         | protocol | protocol_port | admin_state_up |
+--------------------------------------+-----------------+--------------+----------+---------------+----------------+
| d0fb168b-008b-44b8-9bbc-b59d4ada021e |                 | listernerlb3 | HTTP     |            80 | True           |
| b5c02849-a247-48ad-909d-cccbcbe4b367 |                 | listernerlb0 | HTTP     |            80 | True           |
| 0c061dcf-006f-4283-a88c-c14ce2f0096a |                 | listernerlb1 | HTTP     |            80 | True           |
+--------------------------------------+-----------------+--------------+----------+---------------+----------------+

This shows up in the VSD console as in the next picture:

(Image: VSD console view of the LBaaS load balancers)

Check the namespaces that you’ve just created:


[root@os-controller ~(kyst_adm)]# ip netns list
qlbaas-c5e548b9-6936-435d-b468-0aa4b9fcd08a
qlbaas-a986bead-2fe5-4f53-a607-0c197565a1b3
qlbaas-b1bd8993-acc7-484d-ba93-b5ce185510b4

Now let's create a pool:


[root@os-controller ~(kyst_adm)]# neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listernerlb3 --protocol HTTP --name pool1
Created a new pool:
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| healthmonitor_id    |                                                |
| id                  | 4fc8f356-01bf-4aa2-8fcb-afa5b49d8ef3           |
| lb_algorithm        | ROUND_ROBIN                                    |
| listeners           | {"id": "d0fb168b-008b-44b8-9bbc-b59d4ada021e"} |
| members             |                                                |
| name                | pool1                                          |
| protocol            | HTTP                                           |
| session_persistence |                                                |
| tenant_id           | 63d41744393243b6a51a95c6063fe4c1               |
+---------------------+------------------------------------------------+

Now I will add a couple of servers from different subnets (why not?)


[root@os-controller ~(kyst_adm)]# nova list
+--------------------------------------+--------------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name               | Status | Task State | Power State | Networks                         |
+--------------------------------------+--------------------+--------+------------+-------------+----------------------------------+
| 7fc236ab-f43e-418e-b44a-f40da53a8256 | adm.priv1.inst_fip | ACTIVE | -          | Running     | adm.priv1=192.168.51.2, 10.0.1.7 |
| ff4c0705-73bc-467b-bbbf-f16a6795a53a | adm.priv2.inst_fip | ACTIVE | -          | Running     | adm.priv2=192.168.52.2, 10.0.1.5 |
| aa189578-28c6-4e97-bf4f-a432cd62c0a9 | adm.priv3.inst_fip | ACTIVE | -          | Running     | adm.priv3=192.168.53.2, 10.0.1.8 |
| 598e3ce8-aea1-4d74-aa88-6a94a7cb668d | adm.priv4.inst_fip | ACTIVE | -          | Running     | adm.priv4=192.168.54.2, 10.0.1.6 |
| 642dd34b-ddc5-4c38-a3bd-9697ee9ca81f | test01             | ACTIVE | -          | Running     | private=172.16.1.3, 10.0.1.4     |
| eb4602cd-8614-4ccb-96d2-23dbc2bde2d7 | tsystems01         | ACTIVE | -          | Running     | t-system01=169.87.23.2           |
+--------------------------------------+--------------------+--------+------------+-------------+----------------------------------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-member-create --subnet adm.priv1 --address 192.168.51.2 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| address        | 192.168.51.2                         |
| admin_state_up | True                                 |
| id             | 0f172e78-02f3-4046-8b16-9670b4d3bbb4 |
| protocol_port  | 80                                   |
| subnet_id      | f3355820-69bc-40c6-bfe2-e6c07df24d30 |
| tenant_id      | 63d41744393243b6a51a95c6063fe4c1     |
| weight         | 1                                    |
+----------------+--------------------------------------+
[root@os-controller ~(kyst_adm)]# neutron lbaas-member-create --subnet adm.priv2 --address 192.168.52.2 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| address        | 192.168.52.2                         |
| admin_state_up | True                                 |
| id             | 6f124fce-f44e-45d0-b49e-69cddb93f894 |
| protocol_port  | 80                                   |
| subnet_id      | f5944244-4e12-4c8a-a748-0326e8a015e8 |
| tenant_id      | 63d41744393243b6a51a95c6063fe4c1     |
| weight         | 1                                    |
+----------------+--------------------------------------+

Let's check whether our load balancer is working. I will create an index.html file with the content "I am into server ONE!" and start an HTTP server on the pool member 192.168.51.2. Then I'll try to access it through the load balancer at 172.16.1.4.


[centos@adm ~]$ ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.51.2  netmask 255.255.255.0  broadcast 192.168.51.255
        inet6 fe80::f816:3eff:fe6b:db0b  prefixlen 64  scopeid 0x20
        ether fa:16:3e:6b:db:0b  txqueuelen 1000  (Ethernet)
        RX packets 10442  bytes 7627092 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7828  bytes 657499 (642.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 21498  bytes 1868738 (1.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21498  bytes 1868738 (1.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[centos@adm ~]$ cat index.html 
I am into server ONE!
[centos@adm ~]$ sudo python -m SimpleHTTPServer 80 &
[1] 21508
[centos@adm ~]$ Serving HTTP on 0.0.0.0 port 80 ...

[centos@adm ~]$ telnet 172.16.1.4 80
Trying 172.16.1.4...
Connected to 172.16.1.4.
Escape character is '^]'.
GET /index.html
172.16.1.4 - - [22/Dec/2015 16:35:50] "GET /index.html HTTP/1.0" 200 -
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.5
Date: Tue, 22 Dec 2015 16:35:50 GMT
Content-type: text/html
Content-Length: 22
Last-Modified: Tue, 22 Dec 2015 16:35:04 GMT

I am into server ONE!
Connection closed by foreign host.
[centos@adm ~]$ 
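A plain HTTP client works just as well if you'd rather not drive the request by hand over telnet (a quick alternative check, assuming curl is installed in the client instance):

curl http://172.16.1.4/index.html    # should return "I am into server ONE!" through the VIP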

 

Well, and we’re done!
See you soon!


Some Nuage Labs’ resources for NUTS


Hi there. I am just putting up some tools here that I normally use with Nuage's labs (template: "Nuage VSP 3.2R4 with Red Hat OSP7 – blank"). Most of these were tested on Nuts (a limited-access lab we normally use to amaze customers). However, you can use them and modify them for your own purposes.

Use these scripts at your own risk. If you don't know what they do, don't use them (don't make me tell you later: "I told you!").

neutron-lbaasv2-agent

A script file has been created to easily set up everything I explained in my previous post "INSTALLING OPENSTACK KILO (RED HAT OSP7) LBAAS WITH @NUAGENETWORKS VSP 3.2R4 (HAPROXY)". It's funny to see something that took me days done in just 15 minutes now.

It requires some files related to the neutron, openvswitch and lbaas-agent configurations. You will also have to download the VRS setup file from ALU's support site: Nuage-VRS-3.2.4-133-el7.tar.gz.

All the files can be downloaded from my Bitbucket repo nuage-nuts-lbaas-install, or as a tar.gz file.

I've added some steps beyond my previous post: an update of the Nuage part of Neutron from v3.2R4 to R5. The files must be downloaded from our support site:

  • nuagenetlib-2015.1.3.2.6_198-nuage.noarch.rpm
  • nuage-openstack-neutron-2015.1.1785-nuage.noarch.rpm
  • nuage-openstack-neutronclient-2015.1.1785-nuage.noarch.rpm

Get Nuage VSD domain details from the command line

A small Python app that I built from some examples in Philippe Dellaert's repo on GitHub. I personally hate switching over to the GUI many times just to get a couple of values. This app helps you get details about your L3 domains, subnets and instances. You will also get the expiration date of your Nuage VSP license.

Before running any of these applications you have to install the following packages (if you've installed neutron-lbaasv2-agent as I showed in the previous section, you don't need to add more repos to yum):


yum -y install python-pip
pip install bambou
pip install vspk

Download list-domains-enterprise.py from here, and you’re done!

 
[root@os-controller python-files(kyst_adm)]# python list-domains-enterprise.py

License expiration date: 2016-12-31 15:59:59

Domains inside Enterprise OpenStack_Nuage_Lab
|- Domain: d24798fb-173d-483b-a6c8-c0949992584b
    |- Zone: def_zone-f4eac814-7543-4b5b-878a-cc95169d9762
        |- Subnets: 0240310e-d0da-4b78-9d50-fe67354123ac - 192.168.51.0 - 255.255.255.0
            |- Instance: instance-00000009
        |- Subnets: 02814ccd-e9ce-4415-9814-c0dcb71ec0f1 - 192.168.53.0 - 255.255.255.0
        |- Subnets: 1eaa4236-9c3a-4a83-9234-e5386fbeebf6 - 192.168.52.0 - 255.255.255.0
        |- Subnets: 45916c43-0f29-48bf-9fdd-332a2c99be5f - 172.16.1.0 - 255.255.255.0
        |- Subnets: d745c011-0573-4c00-b805-63d10dd397c3 - 192.168.54.0 - 255.255.255.0
    |- Zone: def_zone-pub-f4eac814-7543-4b5b-878a-cc95169d9762
--------------------------------------------------------------------------------

Source your OpenStack credentials

Sourcing your OS credentials will save you time managing resources through the CLI. It's a trivial thing, but in case you didn't know it, here is my personal file (admin.source):


export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=sag81-sled
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(kyst_adm)]\$ '

Create this file and use it like this:


[root@os-controller ~]# neutron --os-username admin --os-password sag81-sled --os-tenant-name admin --os-auth-url http://10.0.0.10:5000/v2.0/ net-list 
+--------------------------------------+------------------+----------------------------------------------------+
| id                                   | name             | subnets                                            |
+--------------------------------------+------------------+----------------------------------------------------+
| b1a4897a-d6e8-4a0f-ae13-41a6bc40cea5 | private          | 45916c43-0f29-48bf-9fdd-332a2c99be5f 172.16.1.0/24 |
| 2f61f543-214f-462f-afb7-182ec816abe9 | external_network | 8f73aa92-e8af-454b-bffe-55c72257453b 10.0.1.0/24   |
+--------------------------------------+------------------+----------------------------------------------------+

[root@os-controller ~]# source admin.source 
[root@os-controller ~(kyst_adm)]# neutron net-list
+--------------------------------------+------------------+----------------------------------------------------+
| id                                   | name             | subnets                                            |
+--------------------------------------+------------------+----------------------------------------------------+
| b1a4897a-d6e8-4a0f-ae13-41a6bc40cea5 | private          | 45916c43-0f29-48bf-9fdd-332a2c99be5f 172.16.1.0/24 |
| 2f61f543-214f-462f-afb7-182ec816abe9 | external_network | 8f73aa92-e8af-454b-bffe-55c72257453b 10.0.1.0/24   |
+--------------------------------------+------------------+----------------------------------------------------+

Populate your admin tenant

I wrote this script (add-things-to-admin.py) as soon as I created my second lab on Nuts. I didn't want to create instances and networks manually every time. It saves me time and lets me start showing the awesomeness of Nuage just minutes after I've got the lab running.

The app isn’t perfect. You can take it from where I left it. You can argue the same thing could be done thru heat. In fact, I have some nice yaml files that I will gather and share later.

Anyway, when you run the script it should show the following (don't forget to allow ssh access through security groups or the VSD if you want to reach any instance by its floating IP):


[root@os-controller python-files(kyst_adm)]# python nuts.adm.v3.py 
Creating keypair: mykey...
mykey done
Network b9e6f9a3-fba4-4d3f-8b37-8e0c4d6e8178 created
Sub-Network e83eab12-a231-4d4a-a334-9fded03052f5 created
Port {u'subnet_id': u'e83eab12-a231-4d4a-a334-9fded03052f5', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'e83eab12-a231-4d4a-a334-9fded03052f5'], u'port_id': u'e7172f8f-0458-49e8-aa1c-a27a70bcc006', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Network ff207aff-08b7-40a9-9ce0-1b03fda1b1f9 created
Sub-Network 5ced7285-974c-4a1e-83c7-8f8c809a1de4 created
Port {u'subnet_id': u'5ced7285-974c-4a1e-83c7-8f8c809a1de4', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'5ced7285-974c-4a1e-83c7-8f8c809a1de4'], u'port_id': u'149b327f-396b-4825-838f-a94f60fdd3bb', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Network 7e643f6d-1979-4b1a-aae0-f5330dc791cc created
Sub-Network da6afb58-2d40-4572-b0e0-60a0a828d836 created
Port {u'subnet_id': u'da6afb58-2d40-4572-b0e0-60a0a828d836', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'da6afb58-2d40-4572-b0e0-60a0a828d836'], u'port_id': u'7c5cc123-dd64-486c-95ac-81563edec87e', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Network 98d5b2ad-c8c0-4558-bdc2-617d4ad2fffa created
Sub-Network 9ceb8391-e525-4c30-a0bf-d4551e77814f created
Port {u'subnet_id': u'9ceb8391-e525-4c30-a0bf-d4551e77814f', u'tenant_id': u'63d41744393243b6a51a95c6063fe4c1', u'subnet_ids': [u'9ceb8391-e525-4c30-a0bf-d4551e77814f'], u'port_id': u'2e759115-c594-4196-8059-e27dde410395', u'id': u'd24798fb-173d-483b-a6c8-c0949992584b'} created
Port cb285691-2681-4911-86fe-e413d8d7d0a3 created
Booting instance...Creating floating ip...Port abb19e89-9cf7-4113-8646-30e9e4c64ee0 created
Booting instance...Creating floating ip...Port f546cecb-ba5f-4c6b-a368-105257754fdc created
Booting instance...Creating floating ip...Port da35424f-3e59-4e87-a961-f94c0022e43b created
Booting instance...Creating floating ip...done
[root@os-controller python-files(kyst_adm)]# ping 10.0.1.5
PING 10.0.1.5 (10.0.1.5) 56(84) bytes of data.
64 bytes from 10.0.1.5: icmp_seq=1 ttl=61 time=4.61 ms
64 bytes from 10.0.1.5: icmp_seq=2 ttl=61 time=1.24 ms
64 bytes from 10.0.1.5: icmp_seq=3 ttl=61 time=1.42 ms

The OS image is downloaded from the internet (http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1508.qcow2), and nothing more is required for this Python app than the basic stuff already loaded on the controller. Just make sure the VSD license hasn't expired. Once you run this app, you can use os-controller's root private key to access any server through its floating IP (starting at 10.0.1.4 if you haven't used it before).
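For instance, assuming the first floating IP handed out was 10.0.1.4 and using the CentOS cloud image's default "centos" user (both are assumptions on my side), you can jump into an instance like this:


[root@os-controller ~(kyst_adm)]# ssh -i /root/.ssh/id_rsa centos@10.0.1.4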

You will get four OpenStack-managed private networks/subnets already connected to the router. All components can be managed through neutron afterwards.

nuage openstack nuts python script

Enjoy and see you next time!



Sentinel.la | LATAM startups play global | OpenStack Monitoring & Healthcheck


Sentinel.la is not just another startup resting on the open source community mattress. It's living proof that LATAM's talent is playing global. Founders Memo and Paco have been close to the OpenStack Foundation over the last four years and have HUGE experience operating it. Sentinel.la will save you some suffering getting into this amazing OpenStack-connected world.

Sentinel.la launched its beta last week. You can see the value of the solution just by looking at their posts and videos. I will point out some of the key bullets of their offering in the following lines.

One step ahead of users

OpenStack is an amazing platform. However, most of the users are just starting with this bunch of projects.

Liberty offers a really nice dashboard experience. Horizon took what it uses to display the ongoing orchestration process (Heat) and reproduced it in the "network topology" view. Just let the next picture get you hooked.

networktopology sentinel.la openstack healthcare monitoring nova neutron liberty

source: https://www.openstack.org/assets/software/liberty/networktopology.png

Most tenant users don't have enough insight into OpenStack to understand simple issues like: the selected server flavor isn't big enough to fit the selected image (check the next image).

sentinel.la openstack healthcare monitoring nova neutron

source: sentinel.la video https://vimeo.com/154817235

sentinel.la openstack healthcare monitoring nova neutron 02.png

source: sentinel.la video https://vimeo.com/154817235

Those events can be managed quickly, even before users start calling the Help Desk to complain. Understanding what is happening with your users is priceless. Installing OpenStack is complex, even for experts; imagine how hard it can be to operate it. Getting eyes on what is happening behind the scenes will make your service more responsive and agile. If you don't believe me, just check this talk from the last OpenStack Summit in Tokyo: No Valid Host Was Found: Translating Tracebacks by Rackspace (James Denton, Wade Lewis, Sam Yaple). Then tell me how many guys on your team can trace back an issue like this.

https://www.openstack.org/summit/tokyo-2015/videos/presentation/no-valid-host-was-found-translating-tracebacks

These guys state, "Deciphering a traceback is a bit like reading the matrix". That's a good way to size up the challenge of dealing with it. This example shows the error message: "No valid host was found. There are not enough hosts available". What does that mean? It means you need to dig deep into different OpenStack logs to get to the root of the problem. Sentinel.la's server view brings tools to check log messages from different services in a single panel.
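As a rough illustration of what that digging looks like by hand (the log paths below are the usual RDO/packstack locations, and the request ID is just an example, not from a real trace), you end up jumping between files like this:


# find the failure and grab its request ID
grep -r "No valid host was found" /var/log/nova/
# follow the same request ID into the scheduler log to see which filter dropped all the hosts
grep "req-a1b2c3d4" /var/log/nova/nova-scheduler.log
# and into neutron's log if networking is the suspect
grep "req-a1b2c3d4" /var/log/neutron/server.log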

Check Paco’s post: Mastering the Openstack logs

Geographically distributed monitoring

I love this dashboard! Know what is happening across all your sites at a glance. Features I can point out:

  • Push notifications that keep you aware of any error from any server.
  • A list of the last alerts you've received from all your servers.
  • Arrange your servers into different clouds and OpenStack versions and see how they are displayed on a map.
  • Global counters for your availability and services.

Also, there is a server view that helps you dig deep into logs and performance. I love the way services are classified into the different OpenStack projects. Features to point out about this view:

  • The last-alerts panel and services status are really useful for digging into a server's issues.
  • Search the log events using keywords and correlate them.
  • A snapshot of the services running on every server.

sentinel.la-openstack-monitoring-healthcheck-service-nova-neutron-heat-cinder-ceilometer-monasca.000

Unlimited scalability and agility

On-demand resources will help your business scale forever. Sentinel.la doesn't own any piece of infrastructure; 100% of their business is on the cloud. I am not talking only about compute. I am sure it took some time to study the state of the art regarding databases, platforms and agents, and to move everything onto a PaaS strategy.

Starting from an agent based on the open source project tourbillon. Using InfluxDB to master metric storage management. Leveraging scalability and cost through PaaS offerings for MQSeries and MongoDB.

Check the following posts at their site:

  • JSON Web Tokens for dummies: If you offer a service on the cloud, you have to give users confidence that the service is safe. I think JWT has been used perfectly in this case to ensure data, like your identity, is kept secure.
  • OpenStack services on a Time-Series database: A post that describes why it is so important to choose a specific database technology to manage time-series data. I would have thought NoSQL could be a nice fit for that; well, Paco shows why NoSQL isn't a good fit for this case. InfluxDB makes the perfect match here, something nice that's been borrowed from the OpenStack Monasca project architecture (see the small sketch after this list).
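To give a feel for why a time-series database fits here, this is a minimal sketch of writing a metric point into InfluxDB over its HTTP API (the endpoint, database and measurement names are my own assumptions, not Sentinel.la's actual schema):


# create a database for the metrics (InfluxDB 0.9+ HTTP API)
curl -XPOST 'http://localhost:8086/query' --data-urlencode "q=CREATE DATABASE openstack_metrics"
# write one point in line protocol: measurement,tags fields (timestamp defaults to "now")
curl -i -XPOST 'http://localhost:8086/write?db=openstack_metrics' \
  --data-binary 'nova_api_response_time,host=os-controller,service=nova-api value=0.42'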

Final Words

A great big step was launching the beta version last week. There are still many things to improve: make the agent identify OpenStack services by itself, and extend capabilities to other projects like keystone and cinder besides nova and neutron. No doubt you will get those done. You are on the right path to succeed, for sure.

Paco/Memo, congrats on the courage to do something different. I hope more people in LATAM take this as an example and dream BIG!

See you!


Building a Nuage/OpenStack demo at home – Part1


NOTE: I've made some important changes in the next post, like switching from DevStack to PackStack. Anyway, you are invited to check out both and form your own opinion.

The next posts will take you through a step-by-step guide to create your on-premises Proof of Concept of Nuage 3.2R6 and OpenStack Liberty. I'm considering installing this demo on just one server.

The next picture shows the components I'm considering for this demo:

nuage demo devstack pinrojas 01

 

This will help you understand how Nuage works with OpenStack. You will be able to try different use cases like forwarding policies (chaining), ACLs, managing L3/L2 domains, creating LBaaS based on haproxy, etc.

Minimal Capacity Requirements

In order to try some of the mentioned use cases, we need to meet some minimal capacity requirements:

1.- VSD requires at least 8GB of memory and 100GB of disk for a demo (24GB is required in production). Three instances are needed for High Availability. I've tried 4GB on my laptop; however, the services then take forever to come up.

2.- VSC requires 4GB of memory (you need at least two of them in production).

3.- The jumpbox requires a minimum of 2GB of memory.

4.- The OpenStack controller and computes depend on what you want to do. Controller nodes can fit in 4GB of memory perfectly. Computes depend on how much memory you will provide to your instances; in my case I will consider 5GB to have a minimum of 3 instances to play around with.

Now, if we consider installing everything on just one server using DevStack, and we add 4GB to support QEMU and a few of the projects (neutron, nova, keystone, glance), we need at least 27GB of memory (8 + 4 + 2 + 4 + 5 + 4) in that single server. Let's make it 32GB (just in case you feel motivated to also install a VRS-G or an additional compute node).

Creating Base Image with DevStack

Ok guys, we need to create our small private stack to build our demo. At first I was thinking just to use KVM, but what the hell! Let’s do it with DevStack.

My suggestion is to run this OS base for DevStack on a physical server with at least 32GB of memory, 4 CPUs, 2 NICs and 300GB of disk. Otherwise, you can download my VirtualBox Ubuntu image that contains DevStack already downloaded and ready to install via ./stack.sh.

We'll go through the process to install DevStack on a server. Most of the info I'll show you comes from a VirtualBox venture; however, you can use the same procedure on your own hardware. Actually, I'm buying a fanless box from CappuccinoPC and disks/memory from Amazon (I got this configuration from Jerrod). You can also see Diego's option.

In the meantime, I'll continue with my VBox configuration.

Creating your demo in VirtualBox

VBox: Preparing interfaces

I have to define two network adapters based on NAT in my case (I am running this on my laptop and I need to take it with me on my trips, ok?). If your server will stay connected to your home network, you'd better define both as bridged. Reserve a pool of IP addresses outside of your DHCP range; four IPs would be enough.

Then, you will have to define forwarding rules to connect to your VM as in the following picture:

virtualbox forwarding rule devstack liberty nuage demo pinrojas

And connect to your instance with something like "ssh -l ubuntu -p 2222 127.0.0.1".

On the second interface, it would be nice to create ssh access to your jumpbox at 2223:

nuage demo devstack virtualbox jumbox.png

After you have created your jumpbox server, you will be able to connect via "ssh -l ubuntu -p 2223 127.0.0.1". Don't forget to add your public key, among others, to authorized_keys on the jumpbox server.
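If you prefer the CLI over the GUI, the same NAT forwarding rules can be created with VBoxManage. This is just a sketch: the VM name "devstack-base" and the jumpbox floating IP 10.0.3.20 are my assumptions, adjust them to your setup.


# rule for adapter 1: host port 2222 -> guest port 22 (DevStack server)
VBoxManage modifyvm "devstack-base" --natpf1 "ssh-devstack,tcp,,2222,,22"
# rule for adapter 2: host port 2223 -> jumpbox's floating IP, port 22
VBoxManage modifyvm "devstack-base" --natpf2 "ssh-jumpbox,tcp,,2223,10.0.3.20,22"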

VBox: Creating and preparing your server

I am using VirtualBox 4.2.34 with ubuntu-14.04-server-amd64.ova, downloaded from http://virtualboxes.org/images/ubuntu-server/ to my laptop. BTW, it's a Mac with 16GB of RAM ☺.

I've changed the memory configuration of the OVA to 12GB and 4 CPUs, and removed useless things like USB ports.

As soon as you have your brand new Ubuntu running, I suggest you upgrade and reboot afterwards:


sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
reboot

Now let's make more disk space on our brand new server. First add a disk as in the following picture; I've defined 200GB for the glance cache and nova images (/opt/stack).

virtualbox disk devstack liberty nuage demo pinrojas
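In case you also want to do this step from the CLI, a rough equivalent would be the following (the VM name "devstack-base" and the storage controller name "SATA" are assumptions; check yours with VBoxManage showvminfo):


# create a 200GB disk and attach it as a second SATA device
VBoxManage createhd --filename stack-disk.vdi --size 204800
VBoxManage storageattach "devstack-base" --storagectl "SATA" --port 1 --device 0 --type hdd --medium stack-disk.vdi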

After creating this virtual device, we now have to partition and mount it the following way:


ubuntu@ubuntu-amd64:/var/lib$ sudo fdisk -l

Disk /dev/sda: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders, total 37748736 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c95b1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    36702207    18350080   83  Linux
/dev/sda2        36704254    37746687      521217    5  Extended
/dev/sda5        36704256    37746687      521216   82  Linux swap / Solaris

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
ubuntu@ubuntu-amd64:/var/lib$ sudo fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa3859b8e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-419430399, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-419430399, default 419430399):
Using default value 419430399

Command (m for help): p

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8c9832c1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   419430399   209714176   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
ubuntu@ubuntu-amd64:/var/lib$ sudo mkfs -t ext4 /dev/sdb1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
13107200 inodes, 52428544 blocks
2621427 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1600 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

ubuntu@ubuntu-amd64:/var/lib$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            5.9G  4.0K  5.9G   1% /dev
tmpfs           1.2G  420K  1.2G   1% /run
/dev/sda1        18G  3.1G   14G  20% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            5.9G     0  5.9G   0% /run/shm
none            100M     0  100M   0% /run/user
ubuntu@ubuntu-amd64:/var/lib$ sudo mkdir /opt/stack
ubuntu@ubuntu-amd64:/var/lib$ sudo mount /dev/sdb1 /opt/stack
ubuntu@ubuntu-amd64:/var/lib$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            5.9G  4.0K  5.9G   1% /dev
tmpfs           1.2G  420K  1.2G   1% /run
/dev/sda1        18G  3.1G   14G  20% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            5.9G     0  5.9G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sdb1       197G   52M  197G   1% /opt/stack
ubuntu@ubuntu-amd64:~$ sudo tune2fs -m 0 /dev/sdb1
tune2fs 1.42.9 (4-Feb-2014)
Setting reserved blocks percentage to 0% (0 blocks)
ubuntu@ubuntu-amd64:~$ sudo tune2fs -o journal_data_writeback /dev/sdb1
tune2fs 1.42.9 (4-Feb-2014)

Remember to add the following lines to your /etc/fstab and reboot:


# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda1 during installation
UUID=7d4c7424-d351-4b70-8bc2-37f5e37d778b /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=8ceef34f-b47a-4ce3-b3b4-5d93f24667e6 none            swap    sw              0       0
/dev/sdb1 /opt/stack ext4 noatime,nodiratime,data=writeback,barrier=0,nobh,errors=remount-ro 0 1

OS Base DevStack: Install and Configuration

Let’s get our devstack files:


ubuntu@ubuntu-amd64:~$ git clone https://git.openstack.org/openstack-dev/devstack
Cloning into 'devstack'...
remote: Counting objects: 33096, done.
remote: Compressing objects: 100% (15655/15655), done.
remote: Total 33096 (delta 23512), reused 26231 (delta 17015)
Receiving objects: 100% (33096/33096), 6.48 MiB | 747.00 KiB/s, done.
Resolving deltas: 100% (23512/23512), done.
Checking connectivity... done.
 

Let's set our local.conf file for this setup. We define the main directory to store our project and data files ("/opt/stack") among other settings. Also, eth0 will be our main network interface to connect all services like databases and MQ. I've disabled horizon and cinder to save memory for instances. The secondary port eth1 will be our external interface, attached to our public bridge br-ex (check out my post as a reference for OpenVSwitch ports at the network node).

I've also added the serial console option; VSD will require console access before you can do anything with it.

Place this file in the ~/devstack folder. Here is the file that I've used:


[[local|localrc]]
DEST=/opt/stack
SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log

HOST_IP=10.0.2.15
SERVICE_HOST=10.0.2.15
MYSQL_HOST=10.0.2.15
RABBIT_HOST=10.0.2.15
GLANCE_HOSTPORT=10.0.2.15:9292

ADMIN_PASSWORD=demonuage
DATABASE_PASSWORD=demonuage
RABBIT_PASSWORD=demonuage
SERVICE_PASSWORD=demonuage

# Do not use Nova-Network
disable_service n-net
# Do not use Horizon & Cinder
disable_service horizon
disable_service c-api c-sch c-vol
# Enable Neutron
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3
# Enable-Console
enable_service n-sproxy


## Neutron options
Q_USE_SECGROUP=True
FIXED_RANGE="192.168.1.0/24"
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=192.168.1.1
PRIVATE_SUBNET_NAME=Nuage-Priv01

PUBLIC_SUBNET_NAME=Nuage-Public
FLOATING_RANGE="10.0.3.15/27"
Q_FLOATING_ALLOCATION_POOL=start=10.0.3.20,end=10.0.3.30
PUBLIC_NETWORK_GATEWAY="10.0.3.2"
Q_L3_ENABLED=True
PUBLIC_INTERFACE=eth1

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

Another important thing is to define just one default gateway. DevStack resets the interfaces during the process and your connection to the outside could get messy. I've changed eth1 to static and removed its default gateway in the /etc/network/interfaces file as follows:


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
    dns-nameservers 8.8.8.8

# The sec interface
auto eth1
iface eth1 inet static
    address 10.0.3.15
    netmask 255.255.255.0
    dns-nameservers 8.8.8.8

Your session's user must have sudo privileges. Now run ./stack.sh and wait, wait… wait… until you get this message:


========================
DevStack Components Timed
========================

run_process - 69 secs
test_with_retry - 4 secs
apt-get-update - 19 secs
pip_install - 100 secs
restart_apache_server - 5 secs
wait_for_service - 18 secs
apt-get - 41 secs


This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15:5000/
The default users are: admin and demo
The password: demonuage

You will get this network interface configuration:


ubuntu@ubuntu-amd64:~/devstack$ ifconfig -a
br-ex     Link encap:Ethernet  HWaddr 08:00:27:ea:81:23
          inet addr:10.0.3.15  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::34cf:80ff:fe38:387a/64 Scope:Link
          inet6 addr: 2001:db8::2/64 Scope:Global
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1284 (1.2 KB)  TX bytes:1166 (1.1 KB)

br-int    Link encap:Ethernet  HWaddr 02:59:41:8a:01:44
          inet6 addr: fe80::105f:7bff:fef7:813d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:72 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6416 (6.4 KB)  TX bytes:828 (828.0 B)

br-tun    Link encap:Ethernet  HWaddr d2:c5:95:1f:b2:41
          inet6 addr: fe80::2429:aff:fe60:2f8e/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:96:dd:d0
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe96:ddd0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33395 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20053 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18895091 (18.8 MB)  TX bytes:3757597 (3.7 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:ea:81:23
          inet6 addr: fe80::a00:27ff:feea:8123/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:187 errors:0 dropped:0 overruns:0 frame:0
          TX packets:210 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23744 (23.7 KB)  TX bytes:29124 (29.1 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:122326 errors:0 dropped:0 overruns:0 frame:0
          TX packets:122326 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:64243895 (64.2 MB)  TX bytes:64243895 (64.2 MB)

ovs-system Link encap:Ethernet  HWaddr ee:ca:dd:82:73:83
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

virbr0    Link encap:Ethernet  HWaddr ee:43:7c:62:b1:2d
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Let's set our env file to make our job easier. You can copy ~/devstack/userrc_early to your home dir and run "source userrc_early".


ubuntu@ubuntu-amd64:~$ cat userrc
# Use this for debugging issues before files in accrc are created

# Set up password auth credentials now that Keystone is bootstrapped
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://10.0.2.15:35357
export OS_USERNAME=demo
export OS_USER_DOMAIN_ID=default
export OS_PASSWORD=demonuage
export OS_PROJECT_NAME=demo
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne

DevStack: Building some resources to our lab

We'll create some elements in our lab to test our DevStack. First of all, let's create a key pair in the demo project. I used my own key in this VM; if you don't have keys in your .ssh folder you can create them with "ssh-keygen -t rsa". Then let's download an Ubuntu cloud image to load into glance. I will also create a flavor called pin.1 to save some resources.


openstack keypair create --public-key ~/.ssh/id_rsa.pub my-keypair
wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
glance image-create --name ubuntu-trusty-image --file trusty-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare
# need admin access to add flavors. use ~/devstack/userrc_early credentials
openstack flavor create --ram 1024 --vcpus 1 --disk 5 --public pin.1

I've created a couple of networks based on the demo architecture that I've shown. We are now ready to create our first server, called jumpbox, and add a secondary interface to our private network:


openstack network create public-demo
openstack network create private-demo
neutron subnet-create --dns-nameserver 8.8.8.8 --name public-demo public-demo 10.101.0.0/24
# preparing our subnet to use jumpbox as gateway/dns
neutron subnet-create --dns-nameserver 192.168.101.3 --gateway 192.168.101.3 --name private-demo private-demo 192.168.101.0/24
nova boot --image ubuntu-trusty-image --nic net-name=public-demo --flavor pin.1 --key-name my-keypair jumpbox
# use "nova list" to check how it's going
neutron port-create private-demo
# use "neutron port-list" to check out the ID to use over the next command
nova interface-attach --port-id e11de213-3141-465d-85a6-5957261ca395 jumpbox

Don't forget to create a security group to allow ssh access to your instance.


openstack security group create ssh-access
openstack security group rule create --proto tcp --src-ip 0.0.0.0/0 --dst-port 22 ssh-access
openstack server add security group jumpbox ssh-access

I've added a route to connect to this new server (route add -net 10.101.0.0/24 gw 10.0.3.20) through router1 (use "neutron router-list" to check it). In case you use a physical server, you will be able to add floating IPs to access it from your laptop (don't forget to add your public key to the authorized_keys file in .ssh). To associate a floating IP you need the IDs of your port and the floating IP; maybe you will have to create a floating IP first. Here you have an example:


# before to add a floatingip, you need to create interface at router1 to subnet public-demo using ID of the subnet over the next command
neutron router-interface-add router1 971db454-91e2-4a04-af6c-75591a2b758b
neutron floatingip-create public
# use "neutron floatingip-list" to check available ips
neutron floatingip-associate ab73e086-0c70-4d67-80ab-a2c740d25b62 32144a6c-2d47-4e96-97b9-b144855b6a5e

Connect to your jumpbox instance via its IP address (e.g. 10.101.0.3). Don't forget to add a route on your server through the router (e.g. sudo route add -net 10.101.0.0/24 gw 10.0.3.20), or access it through the console.

DevStack-Nova: Enable console access

To get console access you need to install "novaconsole" via:

pip install git+http://github.com/larsks/novaconsole.git

More details are on GitHub. Connect the following way to test your console connection:


ubuntu@ubuntu-amd64:~$ nova get-serial-console jumpbox
+--------+-----------------------------------------------------------------+
| Type   | Url                                                             |
+--------+-----------------------------------------------------------------+
| serial | ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68 |
+--------+-----------------------------------------------------------------+
ubuntu@ubuntu-amd64:~$ novaconsole  --url ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68
WARNING:novaconsole.client:connected to: ws://127.0.0.1:6083/?token=5c48b7ef-84dc-476c-a02e-7cd4a500ab68
WARNING:novaconsole.client:type "~." to disconnect

Ubuntu 14.04.4 LTS jumpbox ttyS0

jumpbox login:

See you in the next part!


Building a Nuage/OpenStack demo at Home: Giving PackStack a chance – Centos7


Howdy,

Exploring some ways to install OpenStack demos with Nuage, and after some tries with DevStack, I'm amazed at how those projects have been packaged and made portable. However, DevStack has some challenges regarding its management. It's painful (almost impossible) to restart services after a server reboot, and managing every service through different sessions handled by the old GNU screen is even worse.

I've got some references and guidelines from Scott Irwin that make PackStack more enjoyable. So, I'm giving it more than a chance.

PackStack: Preparing my server

I'll try it on my laptop's VirtualBox. I've downloaded a CentOS 7 OVA base image, imported it into VBox and set a bridged network interface (192.168.1.15/24). Don't forget to set this interface in promiscuous mode, allowing all kinds of traffic. The range between 192.168.1.2 and 192.168.1.50 is outside the DHCP pool. Also, I've set 8GB of memory to play with a couple of virtual instances later, and a forwarding rule to connect through ssh from my laptop's terminal.

Install net tools: sudo yum -y install net-tools

Edit your /etc/hosts and /etc/hostname files with your own settings. I've used "osc01.nuage.lab" and the IP address 192.168.1.15. Also, modify your ifcfg-enp0s3 file and resolv.conf.
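For reference, a minimal /etc/hosts for this setup would look like the following (hostname and IP are the ones I chose; adjust them to your own). My resulting ifcfg-enp0s3 and resolv.conf are shown right after:


[centos@osc01 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain
192.168.1.15   osc01.nuage.lab osc01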


[centos@osc01 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
HWADDR="08:00:27:0B:BC:9D"
TYPE="Ethernet"
BOOTPROTO=static
NM_CONTROLLED=no
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
NAME="enp0s3"
UUID="dfa5c587-f319-41dc-b7da-84fe77bf4f85"
ONBOOT="yes"
IPADDR=192.168.1.15
PREFIX=24
GATEWAY=192.168.1.254
DNS=192.168.1.254
[centos@osc01 ~]$ cat /etc/resolv.conf
search nuage.lab
nameserver 192.168.1.254

Stop and disable NetworkManager and Firewall:


sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
sudo systemctl start network

Edit /etc/selinux/config and change SELINUX=disabled.
Some OSes need more than just disabling SELinux in this file; I also had to change /etc/grub2.conf the following way (manage this file with precaution).

This step is very important if you want to save resources on your laptop.


### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (3.10.0-123.9.2.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-123.el7.x86_64-advanced-f12869d8-bd8f-40b9-98fa-bbbdbf4d0301' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod xfs
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
      search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  aca5ee7d-3e13-43ac-8dd3-a1486f5948e4
    else
      search --no-floppy --fs-uuid --set=root aca5ee7d-3e13-43ac-8dd3-a1486f5948e4
    fi
    linux16 /vmlinuz-3.10.0-123.9.2.el7.x86_64 root=/dev/mapper/centos-root ro rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto  vconsole.keymap=us rhgb quiet LANG=en_US.UTF-8 selinux=0
    initrd16 /initramfs-3.10.0-123.9.2.el7.x86_64.img
}
#
#... more boring lines
#

Execute sudo yum -y update and reboot

Check your CentOS release with sudo rpm --query centos-release. I've got centos-release-7-2.1511.el7.centos.2.10.x86_64.

Packstack: Set NTP Client settings

Set your timezone (in my case US/Central): sudo ln -s /usr/share/zoneinfo/US/Central /etc/localtime. You may need to delete /etc/localtime first.

Check your /etc/ntp.conf file and do a manual sync:


[root@osc01 ~]# service ntpd stop
Shutting down ntpd:                                        [  OK  ]
[root@osc01 ~]# ntpdate -u 50.22.155.163
12 Apr 10:59:48 ntpdate[2317]: step time server 50.22.155.163 offset 1424.472299 sec
[root@osc01 ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@osc01 ~]# ntpstat #as many times take to sync up
synchronised to NTP server (152.2.133.54) at stratum 2
   time correct to within 1049 ms
   polling server every 64 s

PackStack: Installing and configuring

Set up the RDO repositories the following way: yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm


[centos@ocs01 ~]$ sudo yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm
Loaded plugins: fastestmirror
rdo-release-liberty-2.noarch.rpm                                                                                                                           | 5.1 kB  00:00:00
Examining /var/tmp/yum-root-RTP070/rdo-release-liberty-2.noarch.rpm: rdo-release-liberty-2.noarch
Marking /var/tmp/yum-root-RTP070/rdo-release-liberty-2.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package rdo-release.noarch 0:liberty-2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================================================================================
 Package                                 Arch                               Version                               Repository                                                 Size
==================================================================================================================================================================================
Installing:
 rdo-release                             noarch                             liberty-2                             /rdo-release-liberty-2.noarch                             1.4 k

Transaction Summary
==================================================================================================================================================================================
Install  1 Package

Total size: 1.4 k
Installed size: 1.4 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : rdo-release-liberty-2.noarch                                                                                                                                   1/1
  Verifying  : rdo-release-liberty-2.noarch                                                                                                                                   1/1

Installed:
  rdo-release.noarch 0:liberty-2

Complete!

Install PackStack: yum install -y openstack-packstack


[centos@ocs01 ~]$ sudo yum install -y openstack-packstack
Loaded plugins: fastestmirror
openstack-liberty                                                                                                                                          | 2.9 kB  00:00:00
openstack-liberty/x86_64/primary_db                                                                                                                        | 544 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: repo1.dal.innoscale.net
 * extras: pubmirrors.dal.corespace.com
 * updates: reflector.westga.edu
Resolving Dependencies
--> Running transaction check
---> Package openstack-packstack.noarch 1:7.0.0-0.10.dev1682.g42b3426.el7 will be installed
--> Processing Dependency: openstack-packstack-puppet = 1:7.0.0-0.10.dev1682.g42b3426.el7 for package: 1:openstack-packstack-7.0.0-0.10.dev1682.g42b3426.el7.noarch
--> Processing Dependency: openstack-puppet-modules >= 2014.2.10 for package: 1:openstack-packstack-7.0.0-0.10.dev1682.g42b3426.el7.noarch
#
#... some boring lines
#
Installed:
  openstack-packstack.noarch 1:7.0.0-0.10.dev1682.g42b3426.el7

Dependency Installed:
  PyYAML.x86_64 0:3.10-11.el7                                              jbigkit-libs.x86_64 0:2.0-11.el7                   libjpeg-turbo.x86_64 0:1.2.90-5.el7
  libtiff.x86_64 0:4.0.3-14.el7                                            libwebp.x86_64 0:0.3.0-3.el7                       libyaml.x86_64 0:0.1.4-11.el7_0
  openstack-packstack-puppet.noarch 1:7.0.0-0.10.dev1682.g42b3426.el7      openstack-puppet-modules.noarch 1:7.0.1-1.el7      pyOpenSSL.noarch 0:0.15.1-1.el7
  python-docutils.noarch 0:0.11-0.2.20130715svn7687.el7                    python-enum34.noarch 0:1.0.4-1.el7                 python-idna.noarch 0:2.0-1.el7
  python-ipaddress.noarch 0:1.0.7-4.el7                                    python-netaddr.noarch 0:0.7.18-1.el7               python-pillow.x86_64 0:2.0.0-19.gitd1c6db8.el7
  python-ply.noarch 0:3.4-10.el7                                           python-pycparser.noarch 0:2.14-1.el7               python-six.noarch 0:1.9.0-2.el7
  python2-cffi.x86_64 0:1.5.2-1.el7                                        python2-cryptography.x86_64 0:1.2.1-3.el7          python2-pyasn1.noarch 0:0.1.9-6.el7.1
  ruby.x86_64 0:2.0.0.598-25.el7_1                                         ruby-irb.noarch 0:2.0.0.598-25.el7_1               ruby-libs.x86_64 0:2.0.0.598-25.el7_1
  rubygem-bigdecimal.x86_64 0:1.2.0-25.el7_1                               rubygem-io-console.x86_64 0:0.4.2-25.el7_1         rubygem-json.x86_64 0:1.7.7-25.el7_1
  rubygem-psych.x86_64 0:2.0.0-25.el7_1                                    rubygem-rdoc.noarch 0:4.0.0-25.el7_1               rubygems.noarch 0:2.0.14-25.el7_1

Complete!

Packstack: configuring your OpenStack instance in a box

You can install this with the default settings: packstack --allinone

Or create your own packstack answer file like mine and use packstack --answer-file=/your/answer/file

However, I've decided to use bridged interfaces after some bad experiences with the NAT ones. I've turned the enp0s3 interface into a port on the br-ex bridge. That way I will be able to access any instance with a floating IP from my LAN at home.

That said, you can either start with a command like this: packstack --allinone --provision-demo=n --os-neutron-ovs-bridge-mappings=extnet:br-ex --os-neutron-ovs-bridge-interfaces=br-ex:enp0s3 --os-neutron-ml2-type-drivers=vxlan,flat --os-cinder-install=n --os-swift-install=n --os-ceilometer-install=n --nagios-install=n

Or use an answer file like this (I've intentionally removed services like cinder and swift, among others):


[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=n
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=n
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.168.1.15
CONFIG_COMPUTE_HOSTS=192.168.1.15
CONFIG_NETWORK_HOSTS=192.168.1.15
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_STORAGE_HOST=192.168.1.15
CONFIG_USE_EPEL=n
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_OPTIONAL=y
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ocs01.nuage.lab
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ocs01.nuage.lab
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.168.1.15
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.168.1.15
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=6fa4f04edee8422e
CONFIG_KEYSTONE_DB_PW=9b528f4cf1034fa9
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=baf9dc7f10bc4e959196123210346f5e
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=ab111b7f96d84895
CONFIG_KEYSTONE_DEMO_PW=60fe9990c4304f4e
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.168.1.15
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=bb1db0b842f04b74
CONFIG_GLANCE_KS_PW=0425826680bc4ede
CONFIG_GLANCE_BACKEND=file
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=41f2fe944b784fbf
CONFIG_NOVA_KS_PW=83ea3891ab7b4ed8
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NEUTRON_KS_PW=bd0cc982cb8746c2
CONFIG_NEUTRON_DB_PW=6e958cec52c74ea4
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=ed66dd7989cb4fc0
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_ML2_SUPPORTED_PCI_VENDOR_DEVS=['15b3:1004', '8086:10ca']
CONFIG_NEUTRON_ML2_SRIOV_AGENT_REQUIRED=n
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:enp0s3
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=072e59e4d9a5416eb4706ebcaa9dd814
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_OVS_BRIDGE=y
CONFIG_MONGODB_HOST=192.168.1.15

Here you have the output of my installation process:


[root@ocs01 ~(keystone_admin)]# packstack --answer-file=packstack-answer-ocs01.bridge2
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160419-224207-ihA0Zh/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Setting up CACERT                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MariaDB manifest entries                      [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron VPNaaS Agent manifest entries         [ DONE ]
Adding Neutron FWaaS Agent manifest entries          [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Adding Neutron SR-IOV Switch Agent manifest entries  [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding post install manifest entries                 [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.1.15_prescript.pp
192.168.1.15_prescript.pp:                           [ DONE ]
Applying 192.168.1.15_amqp.pp
Applying 192.168.1.15_mariadb.pp
192.168.1.15_amqp.pp:                                [ DONE ]
192.168.1.15_mariadb.pp:                             [ DONE ]
Applying 192.168.1.15_keystone.pp
Applying 192.168.1.15_glance.pp
192.168.1.15_keystone.pp:                            [ DONE ]
192.168.1.15_glance.pp:                              [ DONE ]
Applying 192.168.1.15_api_nova.pp
192.168.1.15_api_nova.pp:                            [ DONE ]
Applying 192.168.1.15_nova.pp
192.168.1.15_nova.pp:                                [ DONE ]
Applying 192.168.1.15_neutron.pp
192.168.1.15_neutron.pp:                             [ DONE ]
Applying 192.168.1.15_osclient.pp
Applying 192.168.1.15_horizon.pp
192.168.1.15_osclient.pp:                            [ DONE ]
192.168.1.15_horizon.pp:                             [ DONE ]
Applying 192.168.1.15_postscript.pp
192.168.1.15_postscript.pp:                          [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.1.15. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.1.15/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * The installation log file is available at: /var/tmp/packstack/20160419-224207-ihA0Zh/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20160419-224207-ihA0Zh/manifests

Packstack: preparing my external network and floating IP range

Most of this info I got from RDO's post "Neutron with existing external network".

Just a quick look at what we have so far:


[root@ocs01 ~]# ovs-vsctl show
8353c231-7d13-4680-8486-a70521ec2ff2
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp0s3"
            Interface "enp0s3"
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    ovs_version: "2.4.0"

I will set up a Nuage lab tenant using a floating pool on my local network. To do that we need to create an external network and then a subnet with a range outside of my local network's DHCP pool.


[root@ocs01 ~(keystone_admin)]#  . keystonerc_admin 
[root@ocs01 ~(keystone_admin)]# neutron net-create external_network --provider:network_type flat --provider:physical_network extnet  --router:external
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | e9b19556-1846-473f-9dac-f5b53e65d6d4 |
| mtu                       | 0                                    |
| name                      | external_network                     |
| provider:network_type     | flat                                 |
| provider:physical_network | extnet                               |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 8adad4c02b6c43a3a5bdc705596ff938     |
+---------------------------+--------------------------------------+
[root@ocs01 ~(keystone_admin)]# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.1.17,end=192.168.1.25 --gateway=192.168.1.254 external_network 192.168.1.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.17", "end": "192.168.1.25"} |
| cidr              | 192.168.1.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | False                                            |
| gateway_ip        | 192.168.1.254                                    |
| host_routes       |                                                  |
| id                | dbe5ea98-4f26-43e0-918d-42fad5b3b4f1             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | public_subnet                                    |
| network_id        | e9b19556-1846-473f-9dac-f5b53e65d6d4             |
| subnetpool_id     |                                                  |
| tenant_id         | 8adad4c02b6c43a3a5bdc705596ff938                 |
+-------------------+--------------------------------------------------+

Now we’ll create our Nuage demo tenant.


[root@ocs01 ~(keystone_admin)]# openstack project create --enable nuage
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | 16ce36b9f7d54b518b02f001e7170821 |
| name        | nuage                            |
+-------------+----------------------------------+
[root@ocs01 ~(keystone_admin)]# openstack user create --project nuage --password foo --email mau@nuage.lab --enable nuage
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| email      | mau@nuage.lab                    |
| enabled    | True                             |
| id         | 06df4f2fa1ee4064b33c54bce7c7e7db |
| name       | nuage                            |
| project_id | 16ce36b9f7d54b518b02f001e7170821 |
| username   | nuage                            |
+------------+----------------------------------+
[root@ocs01 ~(keystone_admin)]# curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | glance \
>          image-create --name='cirros image' --visibility=public --container-format=bare --disk-format=qcow2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 12.6M  100 12.6M    0     0  1366k      0  0:00:09  0:00:09 --:--:-- 1540k
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2016-04-21T01:11:11Z                 |
| disk_format      | qcow2                                |
| id               | 34d46776-1a40-46d4-895b-cb626d50a200 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros image                         |
| owner            | 487c319958fb4e3097ba1cd7fa0e3ca9     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-04-21T01:11:19Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
[root@ocs01 ~(keystone_admin)]# cp keystonerc_admin keystonerc_nuage
#
# Editing file keystonerc_nuage
#
[root@ocs01 ~(keystone_admin)]# cat keystonerc_nuage
unset OS_SERVICE_TOKEN
export OS_USERNAME=nuage
export OS_PASSWORD=foo
export OS_AUTH_URL=http://192.168.1.15:5000/v2.0
export PS1='[\u@\h \W(keystone_nuage)]\$ '

export OS_TENANT_NAME=nuage
export OS_REGION_NAME=RegionOne

We'll switch over to our new tenant to create a router and connect it to our external_network. That way, any instance connected to this router can get a floating address from the range we've just prepared.


[root@ocs01 ~(keystone_admin)]# . keystonerc_nuage
[root@ocs01 ~(keystone_nuage)]# neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 36883167-2404-47cf-a86f-bab47d6684a8 |
| name                  | router1                              |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 16ce36b9f7d54b518b02f001e7170821     |
+-----------------------+--------------------------------------+
[root@ocs01 ~(keystone_nuage)]# neutron router-gateway-set router1 external_network
Set gateway for router router1
[root@ocs01 ~(keystone_nuage)]# neutron net-create nuage-lab
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | d46b2c8a-5ed1-4bb7-bac5-053bb4a8bfc9 |
| mtu             | 0                                    |
| name            | nuage-lab                            |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | 16ce36b9f7d54b518b02f001e7170821     |
+-----------------+--------------------------------------+
[root@ocs01 ~(keystone_nuage)]# neutron subnet-create --name nuage-subnet nuage-lab 192.168.101.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.101.2", "end": "192.168.101.254"} |
| cidr              | 192.168.101.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.101.1                                        |
| host_routes       |                                                      |
| id                | 7bb59ca6-7547-4134-a6a1-af0ff166525a                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | nuage-subnet                                         |
| network_id        | d46b2c8a-5ed1-4bb7-bac5-053bb4a8bfc9                 |
| subnetpool_id     |                                                      |
| tenant_id         | 16ce36b9f7d54b518b02f001e7170821                     |
+-------------------+------------------------------------------------------+
[root@ocs01 ~(keystone_nuage)]# neutron router-interface-add router1 nuage-subnet
Added interface 29e44fd3-0ac3-4fa1-a479-ea1f12f4646a to router router1.

Below you can see how our router has been set up:


[root@ocs01 ~(keystone_nuage)]# neutron router-show router1
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                                                                    |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                                                                                                     |
| external_gateway_info | {"network_id": "e9b19556-1846-473f-9dac-f5b53e65d6d4", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "dbe5ea98-4f26-43e0-918d-42fad5b3b4f1", "ip_address": "192.168.1.17"}]} |
| id                    | 36883167-2404-47cf-a86f-bab47d6684a8                                                                                                                                                     |
| name                  | router1                                                                                                                                                                                  |
| routes                |                                                                                                                                                                                          |
| status                | ACTIVE                                                                                                                                                                                   |
| tenant_id             | 16ce36b9f7d54b518b02f001e7170821                                                                                                                                                         |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Now, I’ll continue from Horizon.

I’ve created a CirrOS instance to test my brand new OpenStack implementation:

packstack install pinrojas neutron nuage demo at home 01

As you can see in the following picture, I have direct access to my laptop’s IP address from my CirrOS instance 😉

packstack install pinrojas neutron nuage demo at home 03.png
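
For reference, the same thing can be done from the CLI instead of Horizon. A rough sketch using the commands seen elsewhere in this lab (the instance name, the m1.tiny flavor and the floating address placeholder are illustrative, not taken from my session):

openstack server create --image "cirros image" --flavor m1.tiny --nic net-id=nuage-lab cirros-test
openstack ip floating create external_network
# attach the address returned by the command above
openstack ip floating add <address-returned-above> cirros-test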

See you in the next part!

 


Nuage VSC – Modify QCOW2 images with guestfish


Hi there,

This post is useful to anyone planning to use guestfish to make changes to a qcow2 disk image file. You can solve issues like changing user settings, defining static IP addresses, or changing GRUB settings.

I am using guestfish to change some configuration inside my vsc_singledisk.qcow2 image. Why? Because all changes to the VSC must normally be made through a console. However, if you are planning to run this on OpenStack Liberty/KVM, that becomes an issue: instances are mostly managed through VNC (graphics). So I’ve added network settings to the bof.cfg file to make this instance boot with a specific IP address, and I can follow the next steps of its setup over SSH 😉

Install your guestfish and libvirtd packages

I’ve downloaded a CentOS 7 minimal OVA file to my Mac, imported it into VirtualBox and started it up. I keep SELinux disabled; you may need to change it to permissive at least.

First of all, you have to install KVM and guestfish. Then you will have to disable NetworkManager and firewalld, restart libvirtd and set LIBGUESTFS_BACKEND=direct. I had already copied my qcow2 file over with scp. You will have to change the ownership of this file to qemu:qemu to make it work.


[root@jumbox ~]# yum install -y qemu-kvm libvirt libvirt-python libguestfs-tools virt-install
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: centos-distro.cavecreek.net
 * updates: centos.mia.host-engine.com
#
# many boring lines
# many boring lines
#

Installed:
  libguestfs-tools.noarch 1:1.28.1-1.55.el7.centos.2        libvirt.x86_64 0:1.2.17-13.el7_2.4        qemu-kvm.x86_64 10:1.5.3-105.el7_2.4
  virt-install.noarch 0:1.2.1-8.el7

Dependency Installed:
  libguestfs.x86_64 1:1.28.1-1.55.el7.centos.2                        libguestfs-tools-c.x86_64 1:1.28.1-1.55.el7.centos.2
  libvirt-daemon-kvm.x86_64 0:1.2.17-13.el7_2.4                       perl-Sys-Guestfs.x86_64 1:1.28.1-1.55.el7.centos.2
  perl-Sys-Virt.x86_64 0:1.2.17-2.el7                                 perl-libintl.x86_64 0:1.20-12.el7

Complete!
[root@jumbox ~]# yum -y install guestfish
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: centos-distro.cavecreek.net
 * updates: centos.mia.host-engine.com
#
# many boring lines
# many boring lines
#
Installed:
  libguestfs-tools-c.x86_64 1:1.28.1-1.55.el7.centos.2

Complete!
[root@jumbox ~]# systemctl stop NetworkManager
[root@jumbox ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
[root@jumbox ~]# systemctl start network
[root@jumbox ~]# systemctl stop firewalld
[root@jumbox ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@jumbox ~]# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
[root@jumbox ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id    Name                           State
----------------------------------------------------

virsh # exit
[root@jumbox ~]# chown qemu:qemu vsc_singledisk.qcow2
[root@jumbox ~]# export LIBGUESTFS_BACKEND=direct

Modify your files into your qcow2 image thru guestfish

The next lines show how to modify the qcow2 image file. In this case I am modifying the bof.cfg file in the root folder.


[root@jumbox ~]# mv vsc_singledisk.qcow2 vsc_singledisk_dhcp.qcow2
[root@jumbox ~]# guestfish --rw -a vsc_singledisk_dhcp.qcow2

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> run
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
> mount /dev/sda1 /
> ls /
bof.cfg
config.cfg
nvram.dat
syslinux
timos
> vi /bof.cfg
> exit

The bof.cfg file I found looks like this:


primary-image        cf1:/timos/cpm.tim
primary-config       cf1:/config.cfg
autonegotiate
duplex               full
speed                100
wait                 3
persist              off
console-speed        115200

And this is how I’ve left it.


primary-image    cf1:\timos\cpm.tim
primary-config   cf1:\config.cfg
ip-address-dhcp
primary-dns      192.168.101.3
dns-domain       nuage.lab
static-route     0.0.0.0/1 next-hop 192.168.101.1
autonegotiate
duplex           full
speed            100
wait             3
persist          off
no li-local-save
no li-separate
console-speed    115200
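
If you’d rather script this than go through the interactive vi step, guestfish also takes commands on stdin. A minimal sketch, assuming the same single-partition layout and that you have prepared the new bof.cfg locally:

export LIBGUESTFS_BACKEND=direct
guestfish --rw -a vsc_singledisk_dhcp.qcow2 <<'EOF'
run
mount /dev/sda1 /
upload bof.cfg /bof.cfg
umount-all
EOF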

See ya!


Building a Nuage/PackStack Demo at home – Part 2


First of all, just a reminder that I’m using a fanless server with 8 cores / 32 GB RAM at home (details in Diego’s post). You will also need to read my previous post: BUILDING A NUAGE/OPENSTACK DEMO AT HOME: GIVING PACKSTACK A CHANCE – CENTOS7.

Also, I want to say thanks to Scott Irwin for his scripts and Remi Vichery for his prompt support with my VSC.

In this second part I will show you how to install Nuage VSP on PackStack. Most of the time these components are installed on plain KVM; however, I think you’ll have more fun doing it on OpenStack. It also helps make this demo portable to any other OpenStack instance.

Next, you can see how I am configuring this lab:

pinrojas - nuage packstack lab diagram

These are the instances that I am planning to have when I finish this journey:

pinrojas - nuage packstack lab table

Important Note: before uploading the VSC image, be sure to have read NUAGE VSC – MODIFY QCOW2 IMAGES WITH GUESTFISH.

Check your lab settings before starting anything

We’ll check what we have so far. PackStack is already installed; we did that in the first part.

Checking the networks and subnets:


[root@box01 ~(keystone_admin)]# openstack network list
+--------------------------------------+------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------------------+--------------------------------------+
| 9eec420a-eb76-4ebc-a814-3ce935b9bca2 | external_network | 407b139d-70b6-49c9-9056-e9211a41b7fb |
| 05235f6d-95fc-4455-a6a6-3d4077cab245 | nuage-lab | 60724bd0-8606-4c7a-bae1-7c31410dd456 |
+--------------------------------------+------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack network show 9eec420a-eb76-4ebc-a814-3ce935b9bca2
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| id | 9eec420a-eb76-4ebc-a814-3ce935b9bca2 |
| mtu | 0 |
| name | external_network |
| project_id | da64bceb671e4719b41de08c15e1eebe |
| provider:network_type | flat |
| provider:physical_network | extnet |
| provider:segmentation_id | None |
| router_type | External |
| shared | False |
| state | UP |
| status | ACTIVE |
| subnets | 407b139d-70b6-49c9-9056-e9211a41b7fb |
+---------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# neutron subnet-show 407b139d-70b6-49c9-9056-e9211a41b7fb
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.1.27", "end": "192.168.1.33"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.1.254 |
| host_routes | |
| id | 407b139d-70b6-49c9-9056-e9211a41b7fb |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public_subnet |
| network_id | 9eec420a-eb76-4ebc-a814-3ce935b9bca2 |
| subnetpool_id | |
| tenant_id | da64bceb671e4719b41de08c15e1eebe |
+-------------------+--------------------------------------------------+

Checking the router in the lab’s tenant:


[root@box01 ~(keystone_nuage)]# neutron router-list
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| b9d31b63-99c7-4d84-89e4-6c716210fb20 | nuage-router | {"network_id": "9eec420a-eb76-4ebc-a814-3ce935b9bca2", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "407b139d-70b6-49c9-9056-e9211a41b7fb", "ip_address": "192.168.1.27"}]} |
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@box01 ~(keystone_nuage)]# neutron subnet-list
+--------------------------------------+--------------+------------------+------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+--------------+------------------+------------------------------------------------------+
| 60724bd0-8606-4c7a-bae1-7c31410dd456 | nuage-subnet | 192.168.101.0/24 | {"start": "192.168.101.2", "end": "192.168.101.254"} |
+--------------------------------------+--------------+------------------+------------------------------------------------------+
[root@box01 ~(keystone_admin)]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# neutron router-list
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| b9d31b63-99c7-4d84-89e4-6c716210fb20 | nuage-router | {"network_id": "9eec420a-eb76-4ebc-a814-3ce935b9bca2", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "407b139d-70b6-49c9-9056-e9211a41b7fb", "ip_address": "192.168.1.27"}]} |
+--------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

We’ll update the subnet to define a DHCP allocation pool that avoids any conflict with our instances.


[root@box01 ~(keystone_nuage)]# neutron subnet-update --allocation-pool start=192.168.101.50,end=192.168.101.254 nuage-subnet
Updated subnet: nuage-subnet

Preparing our images and flavors

Create the flavors and upload the images to Glance for our jumpbox (local NTP/DNS server), VSD, VSC and our nested PackStack (controller and Nova servers).


[root@box01 ~(keystone_admin)]# openstack flavor create --ram 1024 --disk 10 --vcpus 1 --public nuage.tiny
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 10 |
| id | a9559f30-3914-4227-8201-5fd7e1262b3d |
| name | nuage.tiny |
| os-flavor-access:is_public | True |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 4096 --disk 10 --vcpus 4 --public nuage.vsc
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 10 |
| id | 6a17cd1c-ee29-4f29-a4c9-14852a1e0394 |
| name | nuage.vsc |
| os-flavor-access:is_public | True |
| ram | 4096 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+

[root@box01 ~(keystone_admin)]# openstack flavor create --ram 8192 --disk 108 --vcpus 4 --public nuage.vsd
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 108 |
| id | d4a3eda0-b2e2-4d86-b28a-357e8b94166c |
| name | nuage.vsd |
| os-flavor-access:is_public | True |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 2048 --disk 20 --vcpus 2 --public nuage.osc
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 20 |
| id | ba637f8a-aff4-4e53-b758-d946c2242b6d |
| name | nuage.osc |
| os-flavor-access:is_public | True |
| ram | 2048 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 5120 --disk 50 --vcpus 4 --public nuage.nova
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 50 |
| id | 88c0cc7c-8aca-4374-aad1-c54c955ab754 |
| name | nuage.nova |
| os-flavor-access:is_public | True |
| ram | 5120 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------+

Let’s install wget to download our CentOS7 image


[root@box01 ~(keystone_admin)]# yum -y install wget
Loaded plugins: fastestmirror
#
# some boring lines
# more boring lines
#
Installed:
wget.x86_64 0:1.14-10.el7_0.1

Complete!
[root@box01 ~(keystone_admin)]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
--2016-05-05 18:18:14-- http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
Resolving cloud.centos.org (cloud.centos.org)... 162.252.80.138
Connecting to cloud.centos.org (cloud.centos.org)|162.252.80.138|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 912654336 (870M)
Saving to: ‘CentOS-7-x86_64-GenericCloud.qcow2’

100%[================================================================================================================>] 912,654,336 5.66MB/s in 43s

2016-05-05 18:18:57 (20.3 MB/s) - ‘CentOS-7-x86_64-GenericCloud.qcow2’ saved [912654336/912654336]

Let’s create our jumpbox image:


[root@box01 ~(keystone_admin)]# openstack image create --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --public --container-format bare centos7-image
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 6008a645f61baffe0d19dfe992def8a6 |
| container_format | bare |
| created_at | 2016-05-05T23:19:33Z |
| disk_format | qcow2 |
| id | e9ee4c2a-006b-4d53-a158-47ec6bb6c422 |
| min_disk | 0 |
| min_ram | 0 |
| name | centos7-image |
| owner | da64bceb671e4719b41de08c15e1eebe |
| protected | False |
| size | 912654336 |
| status | active |
| tags | [] |
| updated_at | 2016-05-05T23:19:43Z |
| virtual_size | None |
| visibility | private |
+------------------+--------------------------------------+

Create your VSD and VSC images. I have them on my laptop, so I have to copy them over with scp.
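
The copy itself is just plain scp from the laptop to box01 (192.168.1.25 in this lab); something along these lines, with the file names as they appear below:

scp VSD-3.2.6_230.qcow2 vsc_singledisk.qcow2 root@192.168.1.25:/root/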


[root@box01 ~(keystone_nuage)]# ls *.qcow2
CentOS-7-x86_64-GenericCloud.qcow2 vsc_singledisk.qcow2 VSD-3.2.6_230.qcow2
[root@box01 ~(keystone_nuage)]# . keystonerc_admin
[root@box01 ~(keystone_admin)]# openstack image create --file VSD-3.2.6_230.qcow2 --disk-format qcow2 --public --container-format bare VSD32R6
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | a1419434721c53bf3c848896c48de7d5 |
| container_format | bare |
| created_at | 2016-05-06T13:14:31Z |
| disk_format | qcow2 |
| id | aff1535d-570b-4e19-98de-9c27cde94784 |
| min_disk | 0 |
| min_ram | 0 |
| name | VSD32R6 |
| owner | da64bceb671e4719b41de08c15e1eebe |
| protected | False |
| size | 5573574656 |
| status | active |
| tags | [] |
| updated_at | 2016-05-06T13:15:22Z |
| virtual_size | None |
| visibility | private |
+------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack image create --file vsc_singledisk-dhcp.qcow2 --disk-format qcow2 --public --container-format bare VSC32R6
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 95a481632192ad8ea3f8701846b0c5ff |
| container_format | bare |
| created_at | 2016-05-06T13:31:55Z |
| disk_format | qcow2 |
| id | abcb1b0b-0389-4f07-b3a3-36bc2d0c0507 |
| min_disk | 0 |
| min_ram | 0 |
| name | VSC32R6 |
| owner | da64bceb671e4719b41de08c15e1eebe |
| protected | False |
| size | 45613056 |
| status | active |
| tags | [] |
| updated_at | 2016-05-06T13:31:56Z |
| virtual_size | None |
| visibility | private |
+------------------+--------------------------------------+

We need to create our keypair. I will use my laptop’s public key and copy it over as follows:


usmovnmroja001:~ mroja001$ scp .ssh/id_rsa.pub root@192.168.1.25:/root
root@192.168.1.25's password:
id_rsa.pub 100% 414 0.4KB/s 00:00

Now, create your keypair using this public key file as follows:


[root@box01 ~(keystone_admin)]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# openstack keypair create --public-key id_rsa.pub pin-laptop
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | b6:01:9c:76:a6:e6:d8:04:38:27:5d:8f:92:20:f3:32 |
| name | pin-laptop |
| user_id | c91cd992e79149209c41416a55a661b1 |
+-------------+-------------------------------------------------+

Creating your servers

Time to create your servers: jumpbox (local NTP/DNS server), VSC and VSD.


[root@box01 ~(keystone_nuage)]# openstack server create --image centos7-image --flavor nuage.tiny --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.3 jumpbox
+--------------------------------------+------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | id9AYj3o7WqE |
| config_drive | |
| created | 2016-05-06T13:56:02Z |
| flavor | nuage.tiny (a9559f30-3914-4227-8201-5fd7e1262b3d) |
| hostId | |
| id | f71bb396-47a8-477f-8f6b-8390769cfa73 |
| image | centos7-image (e9ee4c2a-006b-4d53-a158-47ec6bb6c422) |
| key_name | pin-laptop |
| name | jumpbox |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 39e2f35bc10f4047b1ea77f79559807d |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-05-06T13:56:02Z |
| user_id | c91cd992e79149209c41416a55a661b1 |
+--------------------------------------+------------------------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack server create --image VSD32R6 --flavor nuage.vsd --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.4 vsd01
+--------------------------------------+--------------------------------------------------+
| Field | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | SGsdF4DvkPVo |
| config_drive | |
| created | 2016-05-06T13:57:34Z |
| flavor | nuage.vsd (d4a3eda0-b2e2-4d86-b28a-357e8b94166c) |
| hostId | |
| id | 5befd9f3-98d5-404a-a1a7-ce1fa03127e8 |
| image | VSD32R6 (aff1535d-570b-4e19-98de-9c27cde94784) |
| key_name | pin-laptop |
| name | vsd01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 39e2f35bc10f4047b1ea77f79559807d |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-05-06T13:57:34Z |
| user_id | c91cd992e79149209c41416a55a661b1 |
+--------------------------------------+--------------------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack server create --image VSC32R6 --flavor nuage.vsc --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.5 vsc01
+--------------------------------------+--------------------------------------------------+
| Field | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | wCM4DzJijau9 |
| config_drive | |
| created | 2016-05-06T13:58:10Z |
| flavor | nuage.vsc (6a17cd1c-ee29-4f29-a4c9-14852a1e0394) |
| hostId | |
| id | 77a75f63-4615-4479-ace2-e0b21e70a038 |
| image | VSC32R6 (abcb1b0b-0389-4f07-b3a3-36bc2d0c0507) |
| key_name | pin-laptop |
| name | vsc01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 39e2f35bc10f4047b1ea77f79559807d |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-05-06T13:58:10Z |
| user_id | c91cd992e79149209c41416a55a661b1 |
+--------------------------------------+--------------------------------------------------+
[root@box01 ~(keystone_nuage)]# nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+
| f71bb396-47a8-477f-8f6b-8390769cfa73 | jumpbox | ACTIVE | - | Running | nuage-lab=192.168.101.3 |
| 77a75f63-4615-4479-ace2-e0b21e70a038 | vsc01 | ACTIVE | - | Running | nuage-lab=192.168.101.5 |
| 0f572cb6-d4a4-4b8a-b277-eb55fc859c68 | vsd01 | ACTIVE | - | Running | nuage-lab=192.168.101.4 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------+

Lab topology so far (remember to use the nuage/foo credentials to access Horizon at http://192.168.1.25/dashboard):

pinrojas - nuage lab topology packstack 01.png

 

Jumpbox: Creating your DNS and NTP local server

We’ll start by configuring the NTP and DNS services on the jumpbox. Assign a floating IP to your jumpbox to get access from outside.


[root@box01 ~(keystone_nuage)]# openstack ip floating pool list
+------------------+
| Name |
+------------------+
| external_network |
+------------------+
[root@box01 ~(keystone_nuage)]# openstack ip floating create external_network
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | ca767cc0-fc65-4d74-8e4a-d2ef555c6b0d |
| instance_id | None |
| ip | 192.168.1.28 |
| pool | external_network |
+-------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack ip floating add 192.168.1.28 jumpbox

Add security rules to the default group to allow SSH and ping (ICMP).

pinrojas - nuage lab packstack adding rules to security group.png
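
If you prefer the CLI over Horizon, the equivalent rules would look roughly like this (same syntax as the security group rules used later in this post):

openstack security group rule create --proto tcp --dst-port 22 default
openstack security group rule create --proto icmp default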

Let’s start with network settings…


usmovnmroja001:~ mroja001$ ssh centos@192.168.1.28
The authenticity of host '192.168.1.28 (192.168.1.28)' can't be established.
RSA key fingerprint is d9:f2:5e:95:96:94:48:a2:4a:63:2e:6b:e0:31:fa:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.28' (RSA) to the list of known hosts.
[centos@jumpbox ~]$ su -
password: #you need to change the password before
[root@jumpbox ~]# cat /etc/hosts
127.0.0.1 localhost
192.168.101.3 jumpbox jumpbox.nuage.lab
[root@jumpbox ~]# cat /etc/hostname
jumpbox.nuage.lab
[root@jumpbox ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
[root@jumpbox ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search nuage.lab
nameserver 192.168.1.254
[root@jumpbox ~]# ping www.google.com
PING www.google.com (64.233.176.99) 56(84) bytes of data.
64 bytes from yw-in-f99.1e100.net (64.233.176.99): icmp_seq=1 ttl=43 time=23.3 ms
64 bytes from yw-in-f99.1e100.net (64.233.176.99): icmp_seq=2 ttl=43 time=22.9 ms

Jumpbox: Install your DNS local server

Time to install bind and get our DNS.


[root@jumpbox ~]# yum -y install bind bind-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
#
#some boring lines....
#more boring lines....
Installed:
bind.x86_64 32:9.9.4-29.el7_2.3 bind-utils.x86_64 32:9.9.4-29.el7_2.3 

Dependency Installed:
bind-libs.x86_64 32:9.9.4-29.el7_2.3

Complete!

We have to create the DNS zones and entries as follows:


[root@jumpbox ~]# cat /etc/named.conf
acl "trusted" {
192.168.101.3; # ns1 - can be set to localhost
192.168.101.4;
192.168.101.5;
192.168.101.6;
192.168.101.7;
192.168.101.8;
};

options {
directory "/var/cache/bind";

recursion yes; # enables resursive queries
allow-recursion { trusted; }; # allows recursive queries from "trusted" clients
listen-on { 192.168.101.3; }; # ns1 private IP address - listen on private network only
allow-transfer { none; }; # disable zone transfers by default

forwarders {
8.8.8.8;
8.8.4.4;
};

};
include "/etc/named/named.conf.local";
[root@jumpbox ~]# cat /etc/named/named.conf.local
zone "nuage.lab" {
type master;
file "/etc/named/zones/db.nuage.lab"; # zone file path
};

zone "101.168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.101.168.192"; # 192.168.101/24 subnet
};

[root@jumpbox ~]# cat /etc/named/zones/db.nuage.lab
;
; BIND data file for local loopback interface
;
$TTL 604800
@ IN SOA jumpbox.nuage.lab. admin.nuage.lab (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
; name servers - NS records
IN NS jumpbox.nuage.lab.

; name servers - A records
jumpbox.nuage.lab. IN A 192.168.101.3

; 192.168.101.0/16 - A records
vsd01.nuage.lab. IN A 192.168.101.4
xmpp IN CNAME vsd01
vsc01.nuage.lab. IN A 192.168.101.5
osc01.nuage.lab. IN A 192.168.101.6
nova01.nuage.lab. IN A 192.168.101.7
nova02.nuage.lab. IN A 192.168.101.7

; SRV records
_xmpp-client._tcp IN SRV 10 0 5222 vsd01.nuage.lab.
[root@jumpbox ~]# cat /etc/named/zones/db.101.168.192
;
; BIND reverse data file for local loopback interface
;
$TTL 604800
@ IN SOA jumpbox.nuage.lab. admin.nuage.lab. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers
IN NS jumpbox.nuage.lab.

; PTR Records
3 IN PTR jumpbox.nuage.lab. ; 192.168.101.3
4 IN PTR vsd01.nuage.lab. ; 192.168.101.4
5 IN PTR vsc01.nuage.lab. ; 192.168.101.5
6 IN PTR osc01.nuage.lab. ; 192.168.101.6
7 IN PTR nova01.nuage.lab. ; 192.168.101.7
8 IN PTR nova02.nuage.lab. ; 192.168.101.8

The last settings to make our DNS work:


[root@jumpbox ~]# mkdir /var/cache/bind
[root@jumpbox ~]# systemctl start named
[root@jumpbox ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search nuage.lab
nameserver 192.168.101.3

Test your local DNS


[root@jumpbox ~]# nslookup vsd01
Server: 192.168.101.3
Address: 192.168.101.3#53

Name: vsd01.nuage.lab
Address: 192.168.101.4

[root@jumpbox ~]# nslookup vsd01.nuage.lab
Server: 192.168.101.3
Address: 192.168.101.3#53

Name: vsd01.nuage.lab
Address: 192.168.101.4

[root@jumpbox ~]# nslookup 192.168.101.4
Server: 192.168.101.3
Address: 192.168.101.3#53

4.101.168.192.in-addr.arpa name = vsd01.nuage.lab.
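
Not shown above, but worth doing: enable named so the DNS server survives a jumpbox reboot, and double-check a couple of records with dig (installed as part of bind-utils):

systemctl enable named
dig +short vsc01.nuage.lab @192.168.101.3
dig +short -x 192.168.101.5 @192.168.101.3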

Jumpbox: Install your NTP local server

We’ll install ntp as follows:


[root@jumpbox ~]# yum -y install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
# boring lines
# more boring lines
# more boring lines...
Installed:
ntp.x86_64 0:4.2.6p5-22.el7.centos.1 

Dependency Installed:
autogen-libopts.x86_64 0:5.18-5.el7 ntpdate.x86_64 0:4.2.6p5-22.el7.centos.1

Complete!

Modify your ntp.conf file


[root@jumpbox ~]# cat /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

restrict 192.168.101.0 mask 255.255.255.0 nomodify notrap
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

server ntp1.jst.mfeed.ad.jp iburst
server ntp2.jst.mfeed.ad.jp iburst
server ntp3.jst.mfeed.ad.jp iburst

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

disable monitor

Let’s speed up the sync as follows:


[root@jumpbox ~]# ntpdate -u ntp3.jst.mfeed.ad.jp
6 May 15:08:52 ntpdate[16769]: adjust time server 210.173.160.87 offset 0.037419 sec
[root@jumpbox ~]# ntpdate -u ntp2.jst.mfeed.ad.jp
6 May 15:09:14 ntpdate[16770]: adjust time server 210.173.160.57 offset 0.020899 sec
[root@jumpbox ~]# systemctl start ntpd
[root@jumpbox ~]# ntpstat
synchronised to NTP server (210.173.160.27) at stratum 3
time correct to within 8132 ms
polling server every 64 s
[root@jumpbox ~]# ntpq -cpe -cas
remote refid st t when poll reach delay offset jitter
==============================================================================
+ntp1.jst.mfeed. 133.243.236.17 2 u 17 64 1 190.149 26.285 3.164
*ntp2.jst.mfeed. 133.243.236.17 2 u 16 64 1 169.770 18.778 2.302
+ntp3.jst.mfeed. 133.243.236.17 2 u 15 64 1 168.504 12.655 2.307

ind assid status conf reach auth condition last_event cnt
===========================================================
1 55973 943a yes yes none candidate sys_peer 3
2 55974 963a yes yes none sys.peer sys_peer 3
3 55975 9424 yes yes none candidate reachable 2
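
As with named, you may want ntpd to come back on its own after a reboot of the jumpbox (not shown in my session):

systemctl enable ntpd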

VSD: Configuring Virtualized Services Director v32.R6

Attach a floating IP to the VSD instance and update the nameserver on the subnet.
Don’t forget to add another rule to the default security group to allow access to the local NTP server.


[root@box01 ~]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# openstack ip floating create external_network
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | 91903e82-362b-4ab0-9bfb-437b443fa9ed |
| instance_id | None |
| ip | 192.168.1.29 |
| pool | external_network |
+-------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack ip floating add 192.168.1.29 vsd01
[root@box01 ~(keystone_nuage)]# neutron subnet-update --dns-nameserver 192.168.101.3 nuage-subnet
Updated subnet: nuage-subnet
[root@box01 ~(keystone_nuage)]# openstack security group rule create --proto udp --dst-port 123 default

Reboot your VSD01 so it picks up the latest change to the subnet.
We’ll prepare the server before the VSD installation: change the network settings, add the NTP server to ntp.conf, change the timezone and sync up the time.
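
The reboot can be done from the PackStack host with the same client we’ve been using (a sketch; run it from box01 with keystonerc_nuage sourced):

openstack server reboot vsd01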


usmovnmroja001:~ mroja001$ ssh root@192.168.1.29
The authenticity of host '192.168.1.29 (192.168.1.29)' can't be established.
RSA key fingerprint is 7d:60:cd:5e:2e:08:6e:e1:f2:1d:28:a8:55:ae:23:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.29' (RSA) to the list of known hosts.
root@192.168.1.29's password:
Last login: Fri May 8 21:09:15 2015

Welcome to VSD. (3.2.6_230)

[root@host-192-168-101-4 ~]# hostname vsd01.nuage.lab
[root@host-192-168-101-4 ~]# hostname -f
vsd01.nuage.lab
[root@host-192-168-101-4 ~]# hostname
vsd01.nuage.lab
[root@host-192-168-101-4 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search nuage.lab
nameserver 192.168.101.3
[root@host-192-168-101-4 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="dhcp"
[root@host-192-168-101-4 ~]# cat /etc/hosts
127.0.0.1 localhost
192.168.101.4 vsd01.nuage.lab vsd01
[root@host-192-168-101-4 ~]# cat /etc/ntp.conf
driftfile /var/lib/ntp/drift

restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

restrict 127.0.0.1
restrict -6 ::1

server jumpbox.nuage.lab iburst
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

[root@vsd01 ~]# ntpdate -u jumpbox.nuage.lab
6 May 20:57:18 ntpdate[1363]: adjust time server 192.168.101.3 offset 0.001624 sec
[root@vsd01 ~]# service ntpd start
Starting ntpd:
[root@vsd01 ~]# ntpstat
synchronised to NTP server (216.218.254.202) at stratum 2
time correct to within 51 ms
polling server every 64 s
[root@vsd01 ~]# rm /etc/localtime
rm: remove regular file `/etc/localtime'? y
[root@vsd01 ~]# sudo ln -s /usr/share/zoneinfo/US/Central /etc/localtime

All set to start our installation. Execute /opt/vsd/install.sh:


[root@vsd01 ~]# /opt/vsd/install.sh
-------------------------------------------------------------
V I R T U A L I Z E D S E R V I C E S D I R E C T O R Y
version 3.2.6_230
(c) 2015 Nuage Networks
-------------------------------------------------------------
Error: FQDN vsd01 missing the domain part
[root@vsd01 ~]# vi /etc/hosts
[root@vsd01 ~]# hostname -f
vsd01.nuage.lab
[root@vsd01 ~]# /opt/vsd/install.sh
-------------------------------------------------------------
V I R T U A L I Z E D S E R V I C E S D I R E C T O R Y
version 3.2.6_230
(c) 2015 Nuage Networks
-------------------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 3 redundant installs of VSD with a cluster name node server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): s
WARN: Memory is at 7872 MB; 16GB is preferred
Deploy VSD on single host vsd01.nuage.lab ...
Continue [y|n]? (default=y): y
Starting VSD deployment. This may take as long as 20 minutes in some situations ...
VSD package deployment and configuration DONE. Please initialize VSD.
DONE: VSD deployed.
Starting VSD initialization. This may take as long as 20 minutes in some situations ...
A self-signed certificate has been generated to get you started using VSD.
VSD installed and the services have started.

Wait a few minutes…
Now you can check your services:


[root@vsd01 ~]#
[root@vsd01 ~]#
[root@vsd01 ~]# monit summary
The Monit daemon 5.15 uptime: 3m 

Program 'vsd-stats-status' Status failed
Program 'vsd-core-status' Status failed
Program 'vsd-common-status' Status ok
Process 'tca-daemon' Initializing
Program 'tca-daemon-status' Initializing
Process 'stats-collector' Initializing
Program 'stats-collector-status' Initializing
Process 'opentsdb' Running
Program 'opentsdb-status' Status failed
Program 'ntp-status' Status ok
Process 'mysql' Running
Program 'mysql-status' Status ok
Process 'mediator' Running
Program 'mediator-status' Initializing
File 'jboss-console-log' Accessible
File 'monit-log' Accessible
File 'mediator-out' Does not exist
File 'stats-out' Does not exist
File 'tca-daemon-out' Does not exist
Program 'keyserver-status' Status ok
Process 'jboss' Running
Program 'jboss-status' Status ok
Process 'hbase' Running
Program 'hbase-status' Status ok
Program 'ejbca-status' Status ok
Process 'ejabberd' Running
Program 'ejabberd-status' Status ok
System 'vsd01.nuage.lab' Running

It’s important to understand how to gracefully restart these services, as follows (you need to wait some time between commands until the services come up ‘ok’). We’ll keep the stats services down to avoid annoying messages later; that’s a matter for another post.


[root@vsd01 ~]# monit -g vsd-stats stop
# Wait for all the vsd-stats services to show as “Not Monitored”.
[root@vsd01 ~]# monit -g vsd-core stop
# Wait for all the vsd-core services to show as “Not Monitored”.
[root@vsd01 ~]# monit -g vsd-common stop
# Wait for all the vsd-common services to show as “Not Monitored”.
[root@vsd01 ~]# monit -g vsd-common start
# Wait for all the vsd-common services to show as “status ok”.
[root@vsd01 ~]# monit -g vsd-core start
# Wait for all the vsd-common services to show as “status ok”.
# I will keep vsd-stats down
[root@vsd01 ~]# monit summary
The Monit daemon 5.15 uptime: 17m

Program 'vsd-stats-status' Not monitored
Program 'vsd-core-status' Status ok
Program 'vsd-common-status' Status ok
Process 'tca-daemon' Not monitored
Program 'tca-daemon-status' Not monitored
Process 'stats-collector' Not monitored
Program 'stats-collector-status' Not monitored
Process 'opentsdb' Not monitored
Program 'opentsdb-status' Not monitored
Program 'ntp-status' Status ok
Process 'mysql' Running
Program 'mysql-status' Status ok
Process 'mediator' Running
Program 'mediator-status' Status ok
File 'jboss-console-log' Accessible
File 'monit-log' Accessible
File 'mediator-out' Accessible
File 'stats-out' Accessible
File 'tca-daemon-out' Accessible
Program 'keyserver-status' Status failed
Process 'jboss' Running
Program 'jboss-status' Status ok
Process 'hbase' Not monitored
Program 'hbase-status' Not monitored
Program 'ejbca-status' Status ok
Process 'ejabberd' Running
Program 'ejabberd-status' Status ok
System 'vsd01.nuage.lab' Running

We need to open TCP port 8443 to access the VSD console.
Switch back to your OpenStack controller and add the required security group.


[root@box01 ~(keystone_nuage)]# openstack security group create vsd
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | vsd |
| id | 7ff1256c-aeec-4dac-9cf8-ff6ae9c7ab04 |
| name | vsd |
| rules | [] |
| tenant_id | 39e2f35bc10f4047b1ea77f79559807d |
+-------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack security group rule create --proto tcp --dst-port 8443 vsd
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| group | {} |
| id | 5a82cacf-b846-4307-a059-7640154ff24b |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | 7ff1256c-aeec-4dac-9cf8-ff6ae9c7ab04 |
| port_range | 8443:8443 |
+-----------------+--------------------------------------+
[root@box01 ~(keystone_nuage)]# openstack server add security group vsd01 vsd

It’s time to access your VSD server. Use the cpsroot/cpsroot credentials.

pinrojas - vsd nuage packstack console.png

You’ll need a license. Leave a comment on this post to figure that out.

pinrojas - license vsd console nuage lab packstack.png

VSC: Installing SDN Controller

First of all, you will have to change the qcow2 image. Please check my post: NUAGE VSC – MODIFY QCOW2 IMAGES WITH GUESTFISH.
This is what you will see on your console if everything went OK (don’t get confused if the screen seems stuck at the booting… state).

pinrojas - vsc screen nuage lab

Ping your brand new VSC (vsc01 / 192.168.101.5) from vsd01 to check your installation, as follows:


[root@vsd01 ~]# ping 192.168.101.5
PING 192.168.101.5 (192.168.101.5) 56(84) bytes of data.
64 bytes from 192.168.101.5: icmp_seq=1 ttl=64 time=2.70 ms
64 bytes from 192.168.101.5: icmp_seq=2 ttl=64 time=0.621 ms
^C
--- 192.168.101.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1244ms
rtt min/avg/max/mdev = 0.621/1.661/2.702/1.041 ms
[root@vsd01 ~]# ssh admin@192.168.101.5
The authenticity of host '192.168.101.5 (192.168.101.5)' can't be established.
RSA key fingerprint is 47:e6:d6:33:9f:d7:cb:fa:ab:83:89:28:28:02:8c:56.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.101.5' (RSA) to the list of known hosts.
TiMOS-DC-C-3.2.6-228 cpm/i386 NUAGE VSC Copyright (c) 2000-2016 Alcatel-Lucent.
All rights reserved. All use subject to applicable license agreements.
Built on Tue Jan 26 21:42:10 PST 2016 [d6274a] by builder in /rel3.2-DC/b1/6-228/panos/main

admin@192.168.101.5's password:

A:vm1#
A:vm1#
A:vm1#
A:vm1#
A:vm1# show bof
===============================================================================
BOF (Memory)
===============================================================================
primary-image cf1:\timos\cpm.tim
primary-config cf1:\config.cfg
ip-address-dhcp
address 192.168.101.5/24 active
primary-dns 192.168.101.3
dns-domain openstacklocal
autonegotiate
duplex full
speed 100
wait 3
persist off
no li-local-save
no li-separate
console-speed 115200
===============================================================================
A:vm1#

We’ll add a static route and our domain to the bof.cfg file:


A:vm1# bof
A:vm1>bof# dns-domain nuage.lab
*A:vm1>bof# static-route 0.0.0.0/1 next-hop 192.168.101.1
*A:vm1>bof# save
Writing BOF to cf1:/bof.cfg ... OK
Completed.
A:vm1>bof# exit
A:vm1# show bof
===============================================================================
BOF (Memory)
===============================================================================
primary-image cf1:\timos\cpm.tim
primary-config cf1:\config.cfg
ip-address-dhcp
address 192.168.101.5/24 active
primary-dns 192.168.101.3
dns-domain nuage.lab
static-route 0.0.0.0/1 next-hop 192.168.101.1
autonegotiate
duplex full
speed 100
wait 3
persist off
no li-local-save
no li-separate
console-speed 115200
===============================================================================

Now, we’ll configure NTP and time zone.


A:vm1# configure system
A:vm1>config>system# name vsd01
*A:vsd01>config>system# snmp
*A:vsd01>config>system>snmp# exit
*A:vsd01>config>system# time
*A:vsd01>config>system>time# ntp
*A:vsd01>config>system>time>ntp# server 192.168.101.3
*A:vsd01>config>system>time>ntp# no shutdown
*A:vsd01>config>system>time>ntp# exit
*A:vsd01>config>system>time# sntp
*A:vsd01>config>system>time>sntp# shutdown
*A:vsd01>config>system>time>sntp# exit
*A:vsd01>config>system>time# dst-zone
*A:vsd01>config>system>time# dst-zone CST
*A:vsd01>config>system>time>dst-zone# start second sunday march 02:00
*A:vsd01>config>system>time>dst-zone# end first sunday november 02:00
*A:vsd01>config>system>time>dst-zone# exit
*A:vsd01>config>system>time# zone CST
*A:vsd01>config>system>time# exit
*A:vsd01>config>system# thresholds
*A:vsd01>config>system>thresholds# rmon
*A:vsd01>config>system>thresh>rmon# exit
*A:vsd01>config>system>thresholds# exit
*A:vsd01>config>system# exit
*A:vsd01#

Before saving our configuration, we’ll set up the VSD connection through XMPP as follows:


*A:vsd01#
*A:vsd01# exit all
*A:vsd01# configure vswitch-controller
*A:vsd01>config>vswitch-controller# xmpp-server vsd01:password@vsd01.nuage.lab
*A:vsd01>config>vswitch-controller# open-flow
*A:vsd01>config>vswitch-controller>open-flow# exit
*A:vsd01>config>vswitch-controller# xmpp
*A:vsd01>config>v-switch-controller>xmpp# exit
*A:vsd01>config>vswitch-controller# ovsdb
*A:vsd01>config>vswitch-controller>ovsdb# exit
*A:vsd01>config>vswitch-controller# exit
*A:vsd01#
*A:vsd01# admin save

Now, let’s see if everything is OK and your VSC is connected to your VSD:


A:vsd01# show vswitch-controller vsd detail 

===============================================================================
VSD Server Table
===============================================================================
VSD User Name : cna@vsd01.nuage.lab/nuage
Uptime : 0d 02:31:27
Status : available
Nuage Msg Tx. : 8 Nuage Msg Rx. : 8
Nuage Msg Ack. Rx. : 8 Nuage Msg Error : 0
Nuage Msg TimedOut : 0 Nuage Msg MinRtt : 50
Nuage Msg MaxRtt : 60

===============================================================================

OK guys, in the next post we’ll install our plugin into a brand new OpenStack installation.

See you around!


Building a Nuage/PackStack Lab at home Part 3


Hi there,

In this post, I will install a nested PackStack Liberty with a controller/network node and a Nova compute node. Then I will install the Nuage plugin for Neutron along with the metadata, Heat and Horizon files. I will also install our VRS (Virtualized Routing and Switching), replacing the OVS instance.

I’ve made some changes since my last post. I’ve created a couple of new flavors: nuage.osc.2 and nuage.nova.2. Reason: I ran into some issues with the memory capacity on the OpenStack controller. From now on, replace the nuage.osc and nuage.nova flavors with these:


[root@box01 ~(keystone_admin)]# openstack flavor create --ram 10240 --disk 250 --vcpus 4 --public nuage.nova.2
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 250                                  |
| id                         | 4e191554-25f9-4ce7-bb1b-80941d6ab839 |
| name                       | nuage.nova.2                         |
| os-flavor-access:is_public | True                                 |
| ram                        | 10240                                |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+
[root@box01 ~(keystone_admin)]# openstack flavor create --ram 8192 --disk 50 --vcpus 4 --public nuage.osc.2
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 50                                   |
| id                         | a98464a5-1008-45bb-972d-7997cc2f0df3 |
| name                       | nuage.osc.2                          |
| os-flavor-access:is_public | True                                 |
| ram                        | 8192                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 4                                    |
+----------------------------+--------------------------------------+
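
Since the old flavors won’t be used anymore, you could also remove them (optional; names as created in Part 2):

openstack flavor delete nuage.osc
openstack flavor delete nuage.nova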

Our new list of instances is now:

pinrojas - packstack lab nuage new layout

OpenStack Controller

I will install an OpenStack controller/network node with the following services: Neutron, Horizon, Heat, Nova, Keystone and Glance, plus a Nova compute server with KVM.

Let’s start creating the server


[root@box01 ~]# . keystonerc_nuage
[root@box01 ~(keystone_nuage)]# openstack server create --image centos7-image  --flavor nuage.osc.2 --key-name pin-laptop --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.6 ocs01
+--------------------------------------+----------------------------------------------------------+
| Field                                | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          |                                                          |
| OS-EXT-STS:power_state               | 0                                                        |
| OS-EXT-STS:task_state                | scheduling                                               |
| OS-EXT-STS:vm_state                  | building                                                 |
| OS-SRV-USG:launched_at               | None                                                     |
| OS-SRV-USG:terminated_at             | None                                                     |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| addresses                            |                                                          |
| adminPass                            | fdqWisumw9tB                                             |
| config_drive                         |                                                          |
| created                              | 2016-05-23T17:15:20Z                                     |
| flavor                               | nuage.osc.2 (a98464a5-1008-45bb-972d-7997cc2f0df3)       |
| hostId                               |                                                          |
| id                                   | 859bfab9-6547-471f-b83f-73b7997a2b7f                     |
| image                                | snap-160519-osc01 (6082c049-a98d-4fa3-87be-241e08ea0b19) |
| key_name                             | pin-laptop                                               |
| name                                 | ocs01                                                    |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| project_id                           | 39e2f35bc10f4047b1ea77f79559807d                         |
| properties                           |                                                          |
| security_groups                      | [{u'name': u'default'}]                                  |
| status                               | BUILD                                                    |
| updated                              | 2016-05-23T17:15:20Z                                     |
| user_id                              | c91cd992e79149209c41416a55a661b1                         |
+--------------------------------------+----------------------------------------------------------+

I will add the floating IP 192.168.1.30 to get access to our ocs01 from my home network.


openstack ip floating create external_network
openstack ip floating add 192.168.1.30 ocs01
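
To double-check the association before moving on, a quick look from the same session can help; a minimal sketch, assuming the keystonerc_nuage credentials are still sourced:

# List the floating IPs in the project and confirm 192.168.1.30 is mapped
openstack ip floating list
# Show the server and verify the floating address appears next to its fixed IP
openstack server show ocs01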

Let’s proceed with preparing our controller and installing PackStack.

OpenStack Controller: disable SELinux and update

Let’s disable SELinux to save resources.


[root@ocs01 ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=DISABLED
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@ocs01 ~]# vi /etc/grub2.cfg

Change /etc/grub2.cfg to add selinux=0 to the kernel line and reboot. See an extract of the file below:


### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (3.10.0-327.13.1.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.13.1.el7.x86_64-advanced-8a9d38ed-14e7-462a-be6c-e385d6b1906d' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 8a9d38ed-14e7-462a-be6c-e385d6b1906d
else
search --no-floppy --fs-uuid --set=root 8a9d38ed-14e7-462a-be6c-e385d6b1906d
fi
linux16 /boot/vmlinuz-3.10.0-327.13.1.el7.x86_64 root=UUID=8a9d38ed-14e7-462a-be6c-e385d6b1906d ro console=tty0 console=ttyS0,115200n8 crashkernel=auto console=ttyS0,115200 LANG=en_US.UTF-8 selinux=0
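
If you prefer not to edit grub2.cfg by hand, here is a minimal sketch of the same change using grubby (shipped with CentOS 7):

# Append selinux=0 to the kernel command line of every installed kernel
grubby --update-kernel=ALL --args="selinux=0"
# Verify the argument is now present
grubby --info=ALL | grep selinux
# Reboot for the change to take effect
reboot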

Update your system through “yum -y update”. Set your timezone (in my case US/Central): sudo ln -s /usr/share/zoneinfo/US/Central /etc/localtime. You may need to delete /etc/localtime first.
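
For reference, the same steps as a small sketch (America/Chicago is the zoneinfo equivalent of US/Central):

yum -y update
# Replace any existing localtime link and point it at US/Central
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/US/Central /etc/localtime
# On systemd hosts, timedatectl achieves the same result
timedatectl set-timezone America/Chicago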

OpenStack Controller: Configure NTP Server

Add your jumpbox server to the /etc/ntp.conf file as follows (just showing an extract of the file):


[root@ocs01 ~]# yum -y install ntp
Loaded plugins: fastestmirror
#
# some boring lines
# more boring lines
#
Installed:
ntp.x86_64 0:4.2.6p5-22.el7.centos.1

Dependency Installed:
autogen-libopts.x86_64 0:5.18-5.el7 ntpdate.x86_64 0:4.2.6p5-22.el7.centos.1

Complete!
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server jumpbox.nuage.lab iburst
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

Synchronize the time services as follows:


[root@ocs01 ~]# service ntpd stop
Redirecting to /bin/systemctl stop ntpd.service
[root@ocs01 ~]# ntpdate -u jumpbox.nuage.lab
16 May 19:49:30 ntpdate[11914]: adjust time server 192.168.101.3 offset 0.018515 sec
[root@ocs01 ~]# service ntpd start
Redirecting to /bin/systemctl start ntpd.service
[root@ocs01 ~]# ntpstat
synchronised to NTP server (107.161.29.207) at stratum 3
time correct to within 7972 ms
polling server every 64 s
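
It’s also worth making sure ntpd survives a reboot and is actually using the jumpbox; a small sketch:

# Start ntpd at boot from now on
systemctl enable ntpd
# List the peers and check reachability and offset
ntpq -p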

OpenStack Controller: pre-tasks for the PackStack installation

Install PackStack by running “yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-liberty/rdo-release-liberty-2.noarch.rpm” and then “yum install -y openstack-packstack”.

I’ve created a snapshot of it to reuse later. Take a look at the following:

pinrojas - snap packstack install nuage lab 1 pinrojas - snap packstack install nuage lab 2 pinrojas - snap packstack install nuage lab 3

Now use “ssh-keygen” to generate your key pair on the controller:


[root@ocs01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
f8:d1:79:50:3d:4d:e6:2c:6c:13:e4:86:65:21:0e:c4 root@ocs01.novalocal
The key's randomart image is:
+--[ RSA 2048]----+
| oo oo*+o|
| E+ Bo=.|
| . o B.o|
| . . o o o |
| . S o . |
| . . . |
| . |
| |
| |
+-----------------+
[root@ocs01 ~]# cd .ssh/
[root@ocs01 .ssh]# cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDL/k1URNcPeTG3NZJENPloueh/orIiDuzFRfMbgFuUVJrVVoAWHjAsHYu8N3pzDtZAQSxGK7AcpuHjCveNY+kk1cVI5nzmvguHRce8OeGpXxp1AWAVDOia5ipTPEmdOSk+RP496v64bZR2uInZXMaS97SsXwqXULLLtTxWMjS5evdynNCmAsfmJ+Z2mNrE3l2rZcECJj4uKlNhWAhTN7BlO8soPvE+oX+yjfXqOsTZW+Rtz5tg7ZSDOftNR3HVa859dJxqu6FgOhEELOtP/B5T/NAoSMhpR9VcJmJEZA5iQtTSORIdylHnw+kkGg0ks1/j4TfCzFcm8RvcJ4YKSg6H root@ocs01.novalocal

Create a new key pair for your OpenStack controller by importing the public key as follows:

pinrojas - packstack import keypair openstack controller.png
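
If you prefer the CLI over Horizon for this step, here is a minimal sketch from box01, assuming you copied the controller’s public key to /tmp/ocs01_id_rsa.pub (a hypothetical path) and still have keystonerc_nuage sourced:

# Import ocs01's public key as the key pair used later for the compute node
openstack keypair create --public-key /tmp/ocs01_id_rsa.pub osc01-kpair
openstack keypair list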

Compute Node

We’ll use our snapshot from the controller as follows (don’t forget to use keystone_nuage for credentials). Switch to box01 to create the server.


[root@box01 ~(keystone_nuage)]# openstack server create --image snap-osc01-160516-packstack-pkg --flavor nuage.nova.2 --key-name osc01-kpair --nic net-id=nuage-lab,v4-fixed-ip=192.168.101.7 nova01
+--------------------------------------+-----------------------------------------------------------+
| Field                                | Value                                                     |
+--------------------------------------+-----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                    |
| OS-EXT-AZ:availability_zone          |                                                           |
| OS-EXT-STS:power_state               | 0                                                         |
| OS-EXT-STS:task_state                | scheduling                                                |
| OS-EXT-STS:vm_state                  | building                                                  |
| OS-SRV-USG:launched_at               | None                                                      |
| OS-SRV-USG:terminated_at             | None                                                      |
| accessIPv4                           |                                                           |
| accessIPv6                           |                                                           |
| addresses                            |                                                           |
| adminPass                            | GTbBa5A6JxzS                                              |
| config_drive                         |                                                           |
| created                              | 2016-05-23T17:23:55Z                                      |
| flavor                               | nuage.nova.2 (4e191554-25f9-4ce7-bb1b-80941d6ab839)       |
| hostId                               |                                                           |
| id                                   | c0f78a72-e304-4292-8620-c0581a9e6aa8                      |
| image                                | snap-160519-nova01 (958f0ed6-b186-4a72-a662-df78c3ab78b8) |
| key_name                             | osc01-kpair                                               |
| name                                 | nova01                                                    |
| os-extended-volumes:volumes_attached | []                                                        |
| progress                             | 0                                                         |
| project_id                           | 39e2f35bc10f4047b1ea77f79559807d                          |
| properties                           |                                                           |
| security_groups                      | [{u'name': u'default'}]                                   |
| status                               | BUILD                                                     |
| updated                              | 2016-05-23T17:23:56Z                                      |
| user_id                              | c91cd992e79149209c41416a55a661b1                          |
+--------------------------------------+-----------------------------------------------------------+

A few minutes later, go back to ocs01. Check the connection to the nova server from your OpenStack controller and add the public key to /root/.ssh/authorized_keys.


[root@ocs01 ~]# ssh centos@192.168.101.7
The authenticity of host '192.168.101.7 (192.168.101.7)' can't be established.
ECDSA key fingerprint is aa:31:dd:ab:9a:08:3d:7a:23:93:71:97:e1:fb:15:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.101.7' (ECDSA) to the list of known hosts.
Last login: Mon May 16 19:38:42 2016 from 192.168.1.66
[centos@nova01 ~]$
[centos@nova01 ~]$ sudo vi /root/.ssh/authorized_keys
#
# add public OCS's public key
#
[centos@nova01 ~]$ exit
logout
Connection to 192.168.101.7 closed.
[root@ocs01 ~]# ssh 192.168.101.7
Last login: Tue May 17 18:12:23 2016
[root@nova01 ~]#

IMPORTANT: add this public key to /root/.ssh/authorized_keys on the ocs01 server as well.
Sync NTP once you have clean access to nova01 as the root user:


[root@nova01 ~]# ntpdate -u jumpbox.nuage.lab
17 May 18:17:38 ntpdate[9205]: adjust time server 192.168.101.3 offset 0.018297 sec
[root@nova01 ~]# service ntpd start
Redirecting to /bin/systemctl start ntpd.service
[root@nova01 ~]# ntpstat
synchronised to NTP server (192.168.101.3) at stratum 4
time correct to within 8139 ms
polling server every 64 s

PackStack Installation: Using answer file to install both servers

Now install PackStack from the controller (ocs01), setting the compute host to nova01 in the answer file. First, create the answer file:


[root@ocs01 ~]# packstack --gen-answer-file=/root/answer.txt
[root@ocs01 ~]# vi answer.txt

Change the following parameters:


CONFIG_CONTROLLER_HOST=192.168.101.6
CONFIG_COMPUTE_HOSTS=192.168.101.7
CONFIG_NETWORK_HOSTS=192.168.101.6
CONFIG_PROVISION_DEMO=n
CONFIG_CINDER_INSTALL=n
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
CONFIG_NTP_SERVERS=192.168.101.3
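
If you want to script those edits instead of changing them by hand, here is a sketch with sed (same values as above; adjust the IPs to your own lab):

cd /root
sed -i \
  -e 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.168.101.6/' \
  -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.101.7/' \
  -e 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.168.101.6/' \
  -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' \
  -e 's/^CONFIG_CINDER_INSTALL=.*/CONFIG_CINDER_INSTALL=n/' \
  -e 's/^CONFIG_SWIFT_INSTALL=.*/CONFIG_SWIFT_INSTALL=n/' \
  -e 's/^CONFIG_CEILOMETER_INSTALL=.*/CONFIG_CEILOMETER_INSTALL=n/' \
  -e 's/^CONFIG_NAGIOS_INSTALL=.*/CONFIG_NAGIOS_INSTALL=n/' \
  -e 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=192.168.101.3/' \
  answer.txt
# Confirm the edits
grep -E 'CONFIG_(CONTROLLER_HOST|COMPUTE_HOSTS|NETWORK_HOSTS|NTP_SERVERS)=' answer.txt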

Now, execute “packstack --answer-file=/root/answer.txt”:


[root@ocs01 ~]# packstack --answer-file=/root/answer.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160517-184422-KxwSmh/openstack-setup.log

Installing:
Clean Up [ DONE ]
Discovering ip protocol version [ DONE ]
Setting up ssh keys [ DONE ]
Preparing servers [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Setting up CACERT [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MariaDB manifest entries [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries [ DONE ]
Adding Glance Keystone manifest entries [ DONE ]
Adding Glance manifest entries [ DONE ]
Adding Nova API manifest entries [ DONE ]
Adding Nova Keystone manifest entries [ DONE ]
Adding Nova Cert manifest entries [ DONE ]
Adding Nova Conductor manifest entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Adding Nova Compute manifest entries [ DONE ]
Adding Nova Scheduler manifest entries [ DONE ]
Adding Nova VNC Proxy manifest entries [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries [ DONE ]
Adding Neutron VPNaaS Agent manifest entries [ DONE ]
Adding Neutron FWaaS Agent manifest entries [ DONE ]
Adding Neutron LBaaS Agent manifest entries [ DONE ]
Adding Neutron API manifest entries [ DONE ]
Adding Neutron Keystone manifest entries [ DONE ]
Adding Neutron L3 manifest entries [ DONE ]
Adding Neutron L2 Agent manifest entries [ DONE ]
Adding Neutron DHCP Agent manifest entries [ DONE ]
Adding Neutron Metering Agent manifest entries [ DONE ]
Adding Neutron Metadata Agent manifest entries [ DONE ]
Adding Neutron SR-IOV Switch Agent manifest entries [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding post install manifest entries [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.101.6_prescript.pp
Applying 192.168.101.7_prescript.pp
192.168.101.7_prescript.pp: [ DONE ]
192.168.101.6_prescript.pp: [ DONE ]
Applying 192.168.101.6_chrony.pp
Applying 192.168.101.7_chrony.pp
192.168.101.7_chrony.pp: [ DONE ]
192.168.101.6_chrony.pp: [ DONE ]
Applying 192.168.101.6_amqp.pp
Applying 192.168.101.6_mariadb.pp
192.168.101.6_amqp.pp: [ DONE ]
192.168.101.6_mariadb.pp: [ DONE ]
Applying 192.168.101.6_keystone.pp
Applying 192.168.101.6_glance.pp
192.168.101.6_keystone.pp: [ DONE ]
192.168.101.6_glance.pp: [ DONE ]
Applying 192.168.101.6_api_nova.pp
192.168.101.6_api_nova.pp: [ DONE ]
Applying 192.168.101.6_nova.pp
Applying 192.168.101.7_nova.pp
192.168.101.6_nova.pp: [ DONE ]
192.168.101.7_nova.pp: [ DONE ]
Applying 192.168.101.6_neutron.pp
Applying 192.168.101.7_neutron.pp
192.168.101.7_neutron.pp: [ DONE ]
192.168.101.6_neutron.pp: [ DONE ]
Applying 192.168.101.6_osclient.pp
Applying 192.168.101.6_horizon.pp
192.168.101.6_osclient.pp: [ DONE ]
192.168.101.6_horizon.pp: [ DONE ]
Applying 192.168.101.6_postscript.pp
Applying 192.168.101.7_postscript.pp
192.168.101.7_postscript.pp: [ DONE ]
192.168.101.6_postscript.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]

**** Installation completed successfully ******

Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.101.6. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.101.6/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* Because of the kernel update the host 192.168.101.6 requires reboot.
* The installation log file is available at: /var/tmp/packstack/20160517-184422-KxwSmh/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20160517-184422-KxwSmh/manifests

Reboot the controller

OpenStack Controller: Installing Nuage Plugin for Liberty

First, remove Neutron services from controller/network node osc01.


[root@osc01 ~]# systemctl stop neutron-dhcp-agent.service
[root@osc01 ~]# systemctl stop neutron-l3-agent.service
[root@osc01 ~]# systemctl stop neutron-metadata-agent.service
[root@osc01 ~]# systemctl stop neutron-openvswitch-agent.service
[root@osc01 ~]# systemctl stop neutron-netns-cleanup.service
[root@osc01 ~]# systemctl stop neutron-ovs-cleanup.service
[root@osc01 ~]# systemctl disable neutron-dhcp-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service.
[root@osc01 ~]# systemctl disable neutron-l3-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service.
[root@osc01 ~]# systemctl disable neutron-metadata-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service.
[root@osc01 ~]# systemctl disable neutron-openvswitch-agent.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service.
[root@osc01 ~]# systemctl disable neutron-netns-cleanup.service
[root@osc01 ~]# systemctl disable neutron-ovs-cleanup.service
Removed symlink /etc/systemd/system/multi-user.target.wants/neutron-ovs-cleanup.service.
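
The same stop/disable sequence as a compact sketch, in case you need to repeat it:

for svc in dhcp-agent l3-agent metadata-agent openvswitch-agent netns-cleanup ovs-cleanup; do
    systemctl stop "neutron-${svc}.service"
    systemctl disable "neutron-${svc}.service"
done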

Get the rpm files for openstack liberty (el7) from Nokia’s support site (send me a comment if you need help on that).


[root@osc01 ~]# ls
answer.txt nuage-openstack-heat-5.0.0.1818-nuage.noarch.rpm
keystonerc_admin nuage-openstack-horizon-8.0.0.1818-nuage.noarch.rpm
nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm nuage-openstack-neutron-7.0.0.1818-nuage.noarch.rpm
nuagenetlib-2015.1.3.2.6_228-nuage.noarch.rpm nuage-openstack-neutronclient-3.1.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuagenetlib-2015.1.3.2.6_228-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-neutron-7.0.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-neutronclient-3.1.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-horizon-8.0.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-openstack-heat-5.0.0.1818-nuage.noarch.rpm
[root@osc01 ~]# rpm -i nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm

Configuring Nuage plugin

Create the Nuage plugin configuration file, using the admin token string from the keystone.conf file:


[root@osc01 ~]# mkdir /etc/neutron/plugins/nuage/
[root@osc01 ~]# head -15 /etc/keystone/keystone.conf
[DEFAULT]

#
# From keystone
#

# A "shared secret" that can be used to bootstrap Keystone. This "token" does
# not represent a user, and carries no explicit authorization. To disable in
# production (highly recommended), remove AdminTokenAuthMiddleware from your
# paste application pipelines (for example, in keystone-paste.ini). (string
# value)
#admin_token = ADMIN
admin_token = 05c8a91b388c4956a7e4748d01ceb840

# The base public endpoint URL for Keystone that is advertised to clients
[root@osc01 ~]# vi /etc/neutron/plugins/nuage/nuage_plugin.ini
[root@osc01 ~]# cat /etc/neutron/plugins/nuage/nuage_plugin.ini
[DATABASE]
#connection = mysql://nuage_neutron:password@192.168.101.6/nuage_neutron?charset=utf8

[KEYSTONE]
#keystone_service_endpoint = http://192.168.101.6:35357/v2.0/
#keystone_admin_token = 05c8a91b388c4956a7e4748d01ceb840

[RESTPROXY]
default_net_partition_name = OpenStack_Lab
auth_resource = /me
server = 192.168.101.4:8443
organization = csp
serverauth = csproot:csproot
serverssl = True
base_uri = /nuage/api/v3_0
default_floatingip_quota = 254

Now let’s modify /etc/nova/nova.conf. Change the following lines:


use_forwarded_for = True
instance_name_template = inst-%08x
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret=NuageNetworksSharedSecret
ovs_bridge=alubr0
security_group_api=neutron
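
To double-check that all of those keys ended up in the file, a quick sketch:

# Each parameter above should show up with the expected value
grep -E 'use_forwarded_for|instance_name_template|service_metadata_proxy|metadata_proxy_shared_secret|ovs_bridge|security_group_api' /etc/nova/nova.conf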

Configure Metadata agent in PackStack controller

Delete the current nuage-metadata-agent file and create a new one with the following content:


[root@osc01 ~]# vi /etc/nova/nova.conf
[root@osc01 ~]# rm -rf /etc/default/nuage-metadata-agent
[root@osc01 ~]# vi /etc/default/nuage-metadata-agent
[centos@ocs01 ~]$ cat /etc/default/nuage-metadata-agent
METADATA_PORT=9697
NOVA_METADATA_IP=127.0.0.1
NOVA_METADATA_PORT=8775
METADATA_PROXY_SHARED_SECRET="NuageNetworksSharedSecret"
NOVA_CLIENT_VERSION=2
NOVA_OS_USERNAME=nova
NOVA_OS_PASSWORD=2b12874fcf3c43ff
NOVA_OS_TENANT_NAME=services
NOVA_OS_AUTH_URL=http://192.168.101.6:5000/v2.0
NOVA_REGION_NAME=RegionOne
NUAGE_METADATA_AGENT_START_WITH_OVS=true
NOVA_API_ENDPOINT_TYPE=publicURL


Configuring Neutron

Edit/add the following lines in /etc/neutron/neutron.conf. Don’t forget to comment out “service_plugins = router”.


api_extensions_path = /usr/lib/python2.7/site-packages/nuage_neutron/plugins/nuage/extensions/
core_plugin = neutron.plugins.nuage.plugin.NuagePlugin
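
A small sketch of the same neutron.conf edit done non-interactively (it assumes the stock PackStack line “service_plugins = router” is present):

# Comment out the router service plugin
sed -i 's/^service_plugins = router/#service_plugins = router/' /etc/neutron/neutron.conf
# Verify the plugin settings are in place
grep -E '^(core_plugin|api_extensions_path|#service_plugins)' /etc/neutron/neutron.conf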

Required installation tasks in PackStack Controller

More changes: copy “nuage-openstack-upgrade-1818.tar.gz” to the PackStack controller.


[root@osc01 ~]# mkdir /tmp/nuage
[root@osc01 ~]# mkdir /tmp/nuage/upgrade
[root@osc01 ~]# cd /tmp/nuage/upgrade
[root@osc01 upgrade]# mv /root/nuage-openstack-upgrade-1818.tar.gz .
[root@osc01 upgrade]# tar -xzf nuage-openstack-upgrade-1818.tar.gz
[root@osc01 upgrade]# python set_and_audit_cms.py --neutron-config-file /etc/neutron/neutron.conf --plugin-config-file /etc/neutron/plugins/nuage/nuage_plugin.ini
WARNING:oslo_config.cfg:Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
INFO:VPort_Sync:Starting Vports Sync.
WARNING:neutron.notifiers.nova:Authenticating to nova using nova_admin_* options is deprecated. This should be done using an auth plugin, like password
WARNING:oslo_config.cfg:Option "nova_region_name" from group "DEFAULT" is deprecated. Use option "region_name" from group "nova".
INFO:VPort_Sync:Vports Sync on VSD is now complete.
INFO:generate_cms_id:created CMS 031b436e-3181-4705-8285-e74816d9f5b9
WARNING:neutron.notifiers.nova:Authenticating to nova using nova_admin_* options is deprecated. This should be done using an auth plugin, like password
WARNING:oslo_config.cfg:Option "nova_region_name" from group "DEFAULT" is deprecated. Use option "region_name" from group "nova".
INFO:Upgrade_Logger:Audit begins.
INFO:Upgrade_Logger:Checking subnets.
INFO:Upgrade_Logger:Subnets done.
INFO:Upgrade_Logger:Checking domains.
INFO:Upgrade_Logger:Domains done.
INFO:Upgrade_Logger:Checking static routes.
INFO:Upgrade_Logger:Static routes done.
INFO:Upgrade_Logger:Checking acl entry templates.
INFO:Upgrade_Logger:Acl entry templates done.
INFO:Upgrade_Logger:Checking policy groups.
INFO:Upgrade_Logger:Policy groups done.
INFO:Upgrade_Logger:Checking floating ips.
INFO:Upgrade_Logger:Floating ips done.
INFO:Upgrade_Logger:Checking vports.
INFO:Upgrade_Logger:Vports done.
INFO:Upgrade_Logger:Checking shared network resources.
INFO:Upgrade_Logger:Shared network resources done.
INFO:Upgrade_Logger:Checking application domains.
INFO:Upgrade_Logger:Application domains done.
INFO:Upgrade_Logger:File "audit.yaml" created.
INFO:Upgrade_Logger:Audit Finished.
INFO:Upgrade_Logger:Processing CMS ID discrepancies in the audit file...
INFO:Upgrade_Logger:Processed all the CMS ID discrepancies in the audit file
[root@osc01 upgrade]# systemctl restart neutron-server
[root@osc01 upgrade]# cd
[root@osc01 ~]# . keystonerc_admin
[root@osc01 ~(keystone_admin)]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root@osc01 ~]# systemctl restart neutron-server
[root@osc01 ~]# rm -rf /etc/neutron/plugin.ini
[root@osc01 ~]# ln -s /etc/neutron/plugins/nuage/nuage_plugin.ini /etc/neutron/plugin.ini
[root@osc01 ~]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/nuage/nuage_plugin.ini upgrade head
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
#
# Some boring lines
# More boring lines
#
INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
OK
[root@osc01 ~]# systemctl restart openstack-nova-api
[root@osc01 ~]# systemctl restart openstack-nova-scheduler
[root@osc01 ~]# systemctl restart openstack-nova-conductor
[root@osc01 ~]# systemctl restart neutron-server
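
After those restarts, it’s worth confirming that neutron-server came back up with the Nuage plugin before touching Horizon; a minimal check:

systemctl status neutron-server --no-pager
# With admin credentials sourced, a simple API call confirms the plugin answers
. /root/keystonerc_admin
neutron net-list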

Now let’s just check that we have access to Horizon (don’t log in yet!).

pinrojas - nuage lab packstack home horizon access.png

Compute Node: Configuring nova.conf and installing VRS

It’s time to make some changes to our compute node nova01.


[root@nova01 ~]# rpm -Uvh http://mirror.pnl.gov/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
Retrieving http://mirror.pnl.gov/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
warning: /var/tmp/rpm-tmp.VNThyF: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:epel-release-7-6 ################################# [100%]
[root@nova01 ~]# vi /etc/yum.repos.d/CentOS-Base.repo
[root@nova01 ~]# yum -y update
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
centosplus | 3.4 kB 00:00:00
epel/x86_64/metalink | 12 kB 00:00:00
epel | 4.3 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/4): centosplus/7/x86_64/primary_db | 2.3 MB 00:00:00
(2/4): epel/x86_64/updateinfo | 555 kB 00:00:01
(3/4): epel/x86_64/group_gz | 170 kB 00:00:01
(4/4): epel/x86_64/primary_db | 4.1 MB 00:00:00
Loading mirror speeds from cached hostfile
* base: mirror.rackspace.com
* centosplus: pubmirrors.dal.corespace.com
* epel: mirror.compevo.com
* extras: mirror.team-cymru.org
* updates: mirror.steadfast.net
Resolving Dependencies
#
# Boring lines
# more boring lines
#

Installed:
python2-boto.noarch 0:2.39.0-1.el7 python2-crypto.x86_64 0:2.6.1-9.el7 python2-ecdsa.noarch 0:0.13-4.el7 python2-msgpack.x86_64 0:0.4.7-3.el7

Dependency Installed:
libtomcrypt.x86_64 0:1.17-23.el7 libtommath.x86_64 0:0.42.0-4.el7 postgresql-libs.x86_64 0:9.2.15-1.el7_2 python2-rsa.noarch 0:3.4.1-1.el7

Updated:
hiera.noarch 1:1.3.4-5.el7 libndp.x86_64 0:1.2-6.el7_2 postfix.x86_64 2:2.10.1-6.0.1.el7.centos
python-contextlib2.noarch 0:0.5.1-1.el7 python-mimeparse.noarch 0:0.1.4-2.el7 python-perf.x86_64 0:3.10.0-327.18.2.el7.centos.plus
python-psutil.x86_64 0:2.2.1-1.el7 python-pygments.noarch 0:2.0.2-4.el7 python-qpid.noarch 0:0.32-13.el7
python-qpid-common.noarch 0:0.32-13.el7 python-requests.noarch 0:2.9.1-2.el7 python-unicodecsv.noarch 0:0.14.1-4.el7
python-unittest2.noarch 0:1.1.0-4.el7 python-urllib3.noarch 0:1.13.1-3.el7 python2-eventlet.noarch 0:0.18.4-1.el7

Replaced:
python-boto.noarch 0:2.25.0-2.el7.centos python-crypto.x86_64 0:2.6.1-1.el7.centos python-ecdsa.noarch 0:0.11-3.el7.centos
python-msgpack.x86_64 0:0.4.6-3.el7

Complete!


Nova/KVM: solving dependencies

Let’s solve some dependencies on the KVM node.


[root@nova01 ~]# yum install libvirt -y
#
# Boring lines
#
Installed:
libvirt.x86_64 0:1.2.17-13.el7_2.4

Dependency Installed:
libvirt-daemon-config-network.x86_64 0:1.2.17-13.el7_2.4 libvirt-daemon-driver-lxc.x86_64 0:1.2.17-13.el7_2.4

Complete!
[root@nova01 ~]# yum install python-twisted-core -y
#
# Boring lines
#

Installed:
python-twisted.x86_64 0:15.4.0-3.el7

Dependency Installed:
libXft.x86_64 0:2.3.2-2.el7 libXrender.x86_64 0:0.9.8-2.1.el7 pyserial.noarch 0:2.6-5.el7
python-characteristic.noarch 0:14.3.0-4.el7 python-service-identity.noarch 0:14.0.0-4.el7 python-zope-interface.x86_64 0:4.0.5-4.el7
python2-pyasn1-modules.noarch 0:0.1.9-6.el7.1 tcl.x86_64 1:8.5.13-8.el7 tix.x86_64 1:8.4.3-12.el7
tk.x86_64 1:8.5.13-6.el7 tkinter.x86_64 0:2.7.5-34.el7

Complete!
[root@nova01 ~]# yum install perl-JSON -y
#
# Boring lines
#

Installed:
perl-JSON.noarch 0:2.59-2.el7

Complete!
[root@nova01 ~]# yum install vconfig -y
#
# Boring lines
#

Installed:
vconfig.x86_64 0:1.9-16.el7

Complete!

Installing Nuage VRS

We’ll install the VRS on the nova node, replacing the existing OVS instance.


[root@nova01 ~]# cd /tmp/nuage/
[root@nova01 nuage]# mv /root/nuage-openvswitch-* .
[root@nova01 nuage]# yum -y remove openvswitch
#
# Some boring lines
# More boring lines
#
Removed:
openvswitch.x86_64 0:2.4.0-1.el7

Dependency Removed:
openstack-neutron-openvswitch.noarch 1:7.0.4-1.el7

Complete!
[root@nova01 nuage]# yum -y remove python-openvswitch
#
# Some boring lines
# More boring lines
#

Removed:
python-openvswitch.noarch 0:2.4.0-1.el7

Complete!
[root@nova01 nuage]# yum -y install nuage-openvswitch-3.2.6-232.el7.x86_64.rpm
#
# Some boring lines
# More boring lines
#
Installed:
nuage-openvswitch.x86_64 0:3.2.6-232.el7

Dependency Installed:
perl-Sys-Syslog.x86_64 0:0.33-3.el7 protobuf-c.x86_64 0:1.0.2-2.el7 python-setproctitle.x86_64 0:1.1.6-5.el7

Complete!
[root@nova01 nuage]# vi /etc/default/openvswitch
[root@nova01 nuage]# cat /etc/default/openvswitch | grep 101.5
ACTIVE_CONTROLLER=192.168.101.5
[root@nova01 nuage]# mv /root/nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm .
[root@nova01 nuage]# rpm -i nuage-metadata-agent-3.2.6-232.el7.x86_64.rpm
[root@nova01 nuage]# vi /etc/nova/nova.conf

Configure nova.conf

We’ll modify /etc/nova/nova.conf as follows:


ovs_bridge=alubr0
service_metadata_proxy=true
metadata_proxy_shared_secret=NuageNetworksSharedSecret
use_forwarded_for=true
instance_name_template=instance-%08x

Restart the services as follows:


[root@nova01 nuage]# systemctl restart openstack-nova-compute
[root@nova01 nuage]# systemctl restart openvswitch

Check the service status and connections:


[root@nova01 ~]# systemctl status openvswitch
● openvswitch.service - Nuage Openvswitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Mon 2016-05-23 12:26:19 CDT; 9h ago
 Main PID: 508 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/openvswitch.service
           ├─ 601 ovsdb-server: monitoring pid 602 (healthy)
           ├─ 602 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:warn --remote=punix:/var/run/openvswitch/db.sock --private-key=db:O...
           ├─ 694 ovs-vswitchd: monitoring pid 695 (healthy)
           ├─ 695 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:warn --mlockall --no-chdir --log-file=/var/log/openvswitch...
           ├─1069 nuage-SysMon: monitoring pid 1070 healthy
           ├─1070 /usr/bin/python /sbin/nuage-SysMon -vany:console:emer -vany:syslog:err -vany:file:info --no-chdir --log-file=/var/log/openvswitch/nuage-SysM...
           ├─1121 monitor(vm-monitor): vm-monitor: monitoring pid 1122 (healthy)
           ├─1122 vm-monitor --no-chdir --log-file=/var/log/openvswitch/vm-monitor.log --pidfile=/var/run/openvswitch/vm-monitor.pid --detach --monitor
           ├─1144 nuage-rpc: monitoring pid 1145 (healthy)
           └─1145 nuage-rpc unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --tcp 7406 --ssl 7407 --no-chdir --log-file=/var/log/ope...

May 23 12:26:13 nova01.novalocal openvswitch.init[508]: iptables: No chain/target/match by that name.
May 23 12:26:13 nova01.novalocal openvswitch.init[508]: iptables: No chain/target/match by that name.
May 23 12:26:13 nova01.novalocal openvswitch.init[508]: iptables: Bad rule (does a matching rule exist in that chain?).
May 23 12:26:16 nova01.novalocal openvswitch.init[508]: Starting nuage system monitor:Starting nuage-SysMon[  OK  ]
May 23 12:26:19 nova01.novalocal openvswitch.init[508]: Starting vm-monitor:Starting vm-monitor:Starting vm-monitor[  OK  ]
May 23 12:26:19 nova01.novalocal openvswitch.init[508]: Starting nuage rpc server:Starting nuage-rpc[  OK  ]
May 23 12:26:19 nova01.novalocal systemd[1]: Started Nuage Openvswitch.
May 23 12:26:20 nova01.novalocal ovs-vsctl[1154]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait --timeout=5 set Open_vSwitch . other_config:acl-...-port=514
May 23 12:26:22 nova01.novalocal ovs-vsctl[1185]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait --timeout=5 set Open_vSwitch . other_config:stat...1.4:39090
May 23 12:29:24 nova01 systemd[1]: [/usr/lib/systemd/system/openvswitch.service:10] Unknown lvalue 'ExecRestart' in section 'Service'
Hint: Some lines were ellipsized, use -l to show in full.
[root@nova01 ~]# ovs-vsctl show
2df2c5a3-5f96-4186-bf54-4836d73e3b39
    Bridge "alubr0"
        Controller "ctrl1"
            target: "tcp:192.168.101.5:6633"
            role: master
            is_connected: true
        Port "svc-rl-tap1"
            Interface "svc-rl-tap1"
        Port "svc-rl-tap2"
            Interface "svc-rl-tap2"
        Port svc-pat-tap
            Interface svc-pat-tap
                type: internal
        Port "alubr0"
            Interface "alubr0"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a86506"
            Interface "vxlan-c0a86506"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.101.7", out_key=flow, remote_ip="192.168.101.6"}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "3.2.6-232-nuage"

The next image shows what you will get in the console.

pinrojas - nuage packstack console monitoring openstack demo lab.png

And we are done with our lab. Thanks very much for reading!
See you.


Using CentOS Cloud Images with Virsh and Nuage Metadata


The following tools will help you to create virsh domains from CentOS Cloud images (those images can be downloaded from http://cloud.centos.org/centos/7/images). The script was based on Giovanni’s Post: Create a Linux Lab on KVM Using Cloud Images.

I am adding Nuage metadata into the XML to connect the domain to our VRS. After this process you can manage connectivity, ACLs and forwarding from the Nuage VSD console.

I am not covering how to install the VRS on the KVM node; that has already been done and it is up and running. Check out my previous posts on how to do that. libvirtd is also up and running with no issues. You can find plenty of notes on how to install KVM on the internet.

Prepare scripts, folders and templates

First of all, you will have to create the IMAGE dir. In this case I am using “/root/virt/images”. You’ll copy your CentOS 7 and ISO images into this folder.
Download your CentOS cloud QCOW2 image to your user folder however you like (e.g. with wget or curl). I’m using centos7-cloud-reverse-root.qcow2 in my case.

cloud-init will take its parameters from a CD-ROM image. We’ll create this ISO in the script and set parameters like the hostname and the public key for the SSH connection as user “centos”. You will need the string from the id_rsa.pub file in your .ssh folder.

Also, you have to create the script “virsh_create.sh” and give it execute permissions (i.e. chmod 755 virsh_create.sh), and copy the tmp.xml file to “/root/virt/images”.

You will also need to create a small Python program called macgen.py as follows:

#!/usr/bin/python
# macgen.py script to generate a MAC address for guests on Xen
#
import random
#
def randomMAC():
	mac = [ 0x00, 0x16, 0x3e,
		random.randint(0x00, 0x7f),
		random.randint(0x00, 0xff),
		random.randint(0x00, 0xff) ]
	return ':'.join(map(lambda x: "%02x" % x, mac))
#
print randomMAC()
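
Make it executable and give it a quick run; the output format looks like this (the address itself is random, of course):

chmod +x ~/macgen.py
~/macgen.py
# e.g. 00:16:3e:2a:7f:19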

virsh_create script file

Create your domain and get it connected to our Nuage VRS using the following script.

#!/bin/bash

# Take one argument from the commandline: VM name
if ! [ $# -eq 6 ]; then
    echo "Usage: $0 <node-name> <nuage-enterprise> <nuage-user> <nuage-dom> <nuage-zone> <nuage-subnet>"
    exit 1
fi

# Check if domain already exists
virsh dominfo $1 > /dev/null 2>&1
if [ "$?" -eq 0 ]; then
    echo -n "[WARNING] $1 already exists.  "
    read -p "Do you want to overwrite $1 [y/N]? " -r
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo ""
    else
        echo -e "\nNot overwriting $1. Exiting..."
        exit 1
    fi
fi

# Directory to store images
DIR=~/virt/images

# Location of cloud image
IMAGE=$DIR/centos7-cloud-reverse-root.qcow2

# Amount of RAM in KiB (the value is substituted into the domain XML, which uses unit='KiB')
MEM=1262144

# Number of virtual CPUs
CPUS=1

# Cloud init files
USER_DATA=user-data
META_DATA=meta-data
CI_ISO=$1-cidata.iso
DISK=$1.qcow2
MAC=`~/macgen.py`
UUID=`uuidgen`
TAP="tap-${UUID:0:11}"

# Start clean
rm -rf $DIR/$1
mkdir -p $DIR/$1

pushd $DIR/$1 > /dev/null

    # Create log file
    touch $1.log

    echo "$(date -R) Destroying the $1 domain (if it exists)..."

    # Remove domain with the same name
    virsh destroy $1 >> $1.log 2>&1
    virsh undefine $1 >> $1.log 2>&1
    rm  ./$1.xml >> $1.log 2>&1

    # cloud-init config: set hostname, remove cloud-init package,
    # and add ssh-key
    cat > $USER_DATA << _EOF_
#cloud-config

# Hostname management
preserve_hostname: False
hostname: $1
fqdn: $1.example.local

# Remove cloud-init when finished with it
runcmd:
  - [ yum, -y, remove, cloud-init ]

# Configure where output will go
output:
  all: ">> /var/log/cloud-init.log"

# configure interaction with ssh server
ssh_svcname: ssh
ssh_deletekeys: True
ssh_genkeytypes: ['rsa', 'ecdsa']

# Install my public ssh key to the first user-defined user configured
# in cloud.cfg in the template (which is centos for CentOS cloud images)
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBil2QzORhDcnKiVVNpO5daOSYVp8nshcIc7aTEkdlqCRir2Oni8BEStK7x7bvh0jrp9KptlHPeos87fQs//VXEb1FEprL2c6fPWmVdtjmYw3yzSkaFKMksL7FdUoEiwF6t8pQAg2mU0Qj9emSHBKg5ttdGqNoSvXc92k7iOzgauda7jdNak+Dx9dPhR3FJwHMcZSlQHO4cweZcK63bZitxlFkJ/FJdry/TBirDhRcXslOJ3ECU2xiyRXJVPs3VNLjMdOTTAoMmZj+GraUBbQ9VIqe683xe02sM83th5hj2C4gW3qXUoFkNLfKAMRxXLRMEwI3ABFB/AAUhACxyTJp giovanni@throwaway
_EOF_

    echo "instance-id: $1; local-hostname: $1" > $META_DATA

    echo "$(date -R) Copying template image..."
    cp $IMAGE $DISK

    # Create CD-ROM ISO with cloud-init config
    echo "$(date -R) Generating ISO for cloud-init..."
    genisoimage -output $CI_ISO -volid cidata -joliet -r $USER_DATA $META_DATA &>> $1.log

    echo "$(date -R) Installing the domain and adjusting the configuration..."
    echo "[INFO] Installing with the following parameters:"
    echo "Instance Name: ${1}"
    echo "Nuage Enterprise: ${2}"
    echo "Nuage Domain: ${4}"
    echo "Nuage Zone: ${5}"
    echo "Nuage Subnet: ${6}"
    echo "Nuage Enterprise: ${2}"
    echo "Instance MAC Address: ${MAC}"
    echo "Instance UUID Net Port: ${UUID}"
    echo "Instance tap interface: ${TAP}"
    echo "Instance Memory: ${MEM}"
    echo "Instance CPUs: ${CPU}"


    cat $DIR/tmp.xml | sed "s/%%name%%/${1}/" | sed "s/%%nuage_dom%%/${4}/" | sed "s/%%nuage_enterprise%%/${2}/" | sed "s/%%nuage_subnet%%/${6}/" |  \
    sed "s/%%nuage_user%%/${3}/" | sed "s/%%nuage_zone%%/${5}/" | sed "s/%%memory%%/${MEM}/" | sed "s/%%cpu%%/${CPUS}/" | sed "s/%%uuid%%/${UUID}/" |   \
    sed "s/%%tap-id%%/${TAP}/" | sed "s/%%cdrom%%/${1}\/${CI_ISO}/" | sed "s/%%image%%/${1}\/${DISK}/" | sed "s/%%mac%%/${MAC}/" > $1.xml

    virsh define $1.xml
    virsh start $1
#    virsh console $1

exit

You may run it as follows:

./virsh_create.sh test-01 "ACME Corp" mau dom2 zone0 subnet1
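
Once the script finishes, you can confirm the domain is defined and running with plain virsh (names taken from the example above):

virsh list --all
virsh dominfo test-01
# The console is also available if you want to watch cloud-init finish
# virsh console test-01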

You may add the following lines at the end of the script (before the exit) to eject the media after the domain creation. I haven’t tested this yet, and note that $IP is not set anywhere in the script, so adjust the last echo; it’s your call.

    sleep 30

    # Eject cdrom
    echo "$(date -R) Cleaning up cloud-init..."
    virsh change-media $1 hda --eject --config >> $1.log

    #Remove the unnecessary cloud init files
    rm $USER_DATA $CI_ISO

    echo "$(date -R) DONE. SSH to $1 using $IP, with  username 'centos'."

popd > /dev/null

tmp.xml: XML template file

This file has to be copied into the IMAGE folder.

<domain type='kvm' >
  <name>%%name%%</name>
  <metadata>
    <nuage xmlns="http://www.nuagenetworks.net/2013/Vm/Metadata">
      <user name="%%nuage_user%%"/>
      <enterprise name="%%nuage_enterprise%%"/>
      <nuage_network domain="%%nuage_dom%%" type="ipv4" name="%%nuage_subnet%%" zone="%%nuage_zone%%">
        <interface mac="%%mac%%"/>
      </nuage_network>
    </nuage>
  </metadata>
  <memory unit='KiB'>%%memory%%</memory>
  <currentMemory unit='KiB'>%%memory%%</currentMemory>
  <vcpu placement='static'>%%cpu%%</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/root/virt/images/%%image%%'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/root/virt/images/%%cdrom%%'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb0'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb0'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb0'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='%%mac%%'/>
      <source bridge='alubr0'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='%%uuid%%'/>
      </virtualport>
      <target dev='%%tap-id%%'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/%%name%%.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir0'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir1'/>
    </redirdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>

See you around!



OpenStack Liberty: remove elements like duplicated hypervisors or unwanted ports


When you are experimenting with OpenStack, failures are common. This post shows how to remove unwanted ports or duplicated compute nodes.

Removing Duplicated Compute Nodes

I’ve used this trick several times. Because of my nested OpenStack Nuage lab (and my many reinstallations), I had to remove duplicated nova compute entries using this procedure.

First, let’s check out our hypervisors.


[root@ocs01 ~]# . keystonerc_admin
[root@ocs01 ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | nova01              | down  | enabled |
| 3  | nova01              | up    | enabled |
+----+---------------------+-------+---------+

Now we’ll check our database and see what we have:


[root@ocs01 ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 17
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [nova]> ;
ERROR: No query specified

MariaDB [nova]> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;
+----+---------------------+---------------------+---------------------+
| id | created_at          | updated_at          | hypervisor_hostname |
+----+---------------------+---------------------+---------------------+
|  1 | 2016-05-19 14:23:52 | 2016-05-19 19:16:56 | nova01              |
|  2 | 2016-05-19 19:30:00 | 2016-05-19 20:52:29 | nova01.novalocal    |
|  3 | 2016-05-23 17:27:07 | 2016-05-23 18:15:51 | nova01              |
+----+---------------------+---------------------+---------------------+
3 rows in set (0.00 sec)
MariaDB [nova]> exit
Bye

Let’s check the service list.


[root@ocs01 ~(keystone_admin)]# nova service-list
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 2  | nova-scheduler   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 3  | nova-conductor   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:54.000000 | -               |
| 4  | nova-cert        | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 5  | nova-compute     | nova01           | nova     | enabled | down  | 2016-05-19T19:17:52.000000 | -               |
| 6  | nova-cert        | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 7  | nova-conductor   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 8  | nova-consoleauth | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 9  | nova-scheduler   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:16:53.000000 | -               |
| 10 | nova-compute     | nova01.novalocal | nova     | enabled | up    | 2016-05-23T18:17:01.000000 | -               |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+

We’ll remove the stale hypervisor entries from the compute_nodes and services tables as follows:


[root@ocs01 ~(keystone_admin)]# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [nova]> DELETE FROM compute_nodes WHERE id='1';
Query OK, 1 row affected (0.05 sec)

MariaDB [nova]> DELETE FROM compute_nodes WHERE id='2';
Query OK, 1 row affected (0.07 sec)

MariaDB [nova]> DELETE FROM services WHERE host='nova01';
Query OK, 1 row affected (0.01 sec)

MariaDB [nova]> exit
Bye

Let’s check if we’ve got this fixed.


[root@ocs01 ~(keystone_admin)]# nova service-list
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 2  | nova-scheduler   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 3  | nova-conductor   | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:54.000000 | -               |
| 4  | nova-cert        | osc01.nuage.lab  | internal | enabled | down  | 2016-05-19T19:17:46.000000 | -               |
| 6  | nova-cert        | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 7  | nova-conductor   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 8  | nova-consoleauth | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 9  | nova-scheduler   | ocs01.novalocal  | internal | enabled | up    | 2016-05-23T18:19:43.000000 | -               |
| 10 | nova-compute     | nova01.novalocal | nova     | enabled | up    | 2016-05-23T18:19:41.000000 | -               |
+----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
[root@ocs01 ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 3  | nova01              | up    | enabled |
+----+---------------------+-------+---------+
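
As a side note, for the services table you can often avoid the direct SQL and use the nova API instead; a sketch using the Id column from “nova service-list” above:

# Remove the stale nova-compute service record (id 5 in the earlier listing)
nova service-delete 5
nova service-list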

Removing unwanted ports

You can have issues with your vports sometimes. It happened to me when I had problems with the configuration of my Nuage plugin. After you fix it, some ports will be left behind, and you will have to remove them from the VSD and also from the neutron database.

Here is a way to do it from neutron. Let’s check which ports we need to remove:


[root@ocs01 neutron(keystone_chain)]# neutron port-list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                         |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| 10c38a65-954c-4b74-92d3-83a2fc63306a |      | fa:16:3e:ec:62:41 | {"subnet_id": "889bba29-bcb1-4e0a-9219-0785e76c95bb", "ip_address": "10.31.31.2"} |
| 538479f2-e715-4687-aa88-b4c7626015ea |      | fa:16:3e:f9:e2:7c | {"subnet_id": "889bba29-bcb1-4e0a-9219-0785e76c95bb", "ip_address": "10.31.31.3"} |
| 70466c99-8abd-4ed9-9fcc-2800d4417698 |      | fa:16:3e:78:7a:eb | {"subnet_id": "9d80cebb-cb07-436e-8620-a8277a30ce4a", "ip_address": "10.41.41.2"} |
| 842ae886-2ade-466a-9e1d-3321f26928b0 |      | fa:16:3e:f9:d7:97 | {"subnet_id": "9d80cebb-cb07-436e-8620-a8277a30ce4a", "ip_address": "10.41.41.1"} |
| 8dd2d824-eb70-46c9-b3fa-494aec382bd8 |      | fa:16:3e:1c:01:a7 | {"subnet_id": "889bba29-bcb1-4e0a-9219-0785e76c95bb", "ip_address": "10.31.31.1"} |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
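
Normally you would try the API first and only fall back to SQL when the plugin is out of sync and the call fails; a sketch for one of the ports above:

# This may fail with a plugin/backend error, which is exactly why we go to the database next
neutron port-delete 10c38a65-954c-4b74-92d3-83a2fc63306a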

Now, let’s go to the neutron database and remove these unwanted ports.


[root@ocs01 neutron(keystone_chain)]# mysql -u root 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 600
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use neutron
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [neutron]> delete from ports where id='10c38a65-954c-4b74-92d3-83a2fc63306a';
Query OK, 1 row affected (0.09 sec)

MariaDB [neutron]> delete from ports where id='538479f2-e715-4687-aa88-b4c7626015ea';
Query OK, 1 row affected (0.07 sec)

MariaDB [neutron]> delete from ports where id='70466c99-8abd-4ed9-9fcc-2800d4417698';
Query OK, 1 row affected (0.00 sec)

Send me a comment if you struggle.
See you.


Useful Nuage VRS openvswitch and VSC commands


Sometimes it is important to check what is actually happening behind the scenes among the VRSs or VSCs. Here are some useful commands that will help you out.

Useful Nuage VRS (OpenVSwitch) Commands

Check out Nuage VRS ports and VXLAN tunnel information:

[root@compute02 ~]# ovs-dpctl show
system@ovs-system:
	lookups: hit:48 missed:37 lost:0
	flows: 0
	masks: hit:112 total:0 hit/pkt:1.32
	port 0: ovs-system (internal)
	port 1: svc-pat-tap (internal)
	port 2: alubr0 (internal)
	port 3: svc-rl-tap2
	port 4: svc-rl-tap1
	port 5: eth-lxc-14546
	port 6: vxlan_sys_4789 (vxlan: df_default=false, ttl=0)
	port 7: eth-lxc-14666
	port 8: eth-lxc-14775
	port 9: eth-lxc-14897
	port 10: eth-lxc-15034
	port 11: eth-lxc-15164
	port 12: eth-lxc-15281
	port 13: eth-lxc-15406

To get more details, try this one:

[root@compute02 ~]# ovs-appctl dpif/show
system@ovs-system: hit:48 missed:37
	alubr0:
		alubr0 65534/2: (internal)
		eltep-b2e041 6/6: (vxlan: df_default=false, in_key=11722817, local_ip=10.0.0.12)
		eth-lxc-14546 4/5: (system)
		eth-lxc-14666 9/7: (system)
		eth-lxc-14775 10/8: (system)
		eth-lxc-14897 11/9: (system)
		eth-lxc-15034 12/10: (system)
		eth-lxc-15164 13/11: (system)
		eth-lxc-15281 14/12: (system)
		eth-lxc-15406 15/13: (system)
		svc-pat-tap 1/1: (internal)
		svc-rl-tap1 3/4: (system)
		svc-rl-tap2 2/3: (system)
		ta00b8329de 7/6: (vxlan: df_default=false, key=8595934, remote_ip=10.0.0.11)
		ta00bb2e041 8/6: (vxlan: df_default=false, key=11722817, remote_ip=10.0.0.11)
		vltep-8329de 5/6: (vxlan: df_default=false, in_key=8595934, local_ip=10.0.0.12)

Find out the VRF service id for our bridge alubr0 in the VRS:

[root@compute02 ~]# ovs-appctl vrf/list alubr0
vrfs: 20029

Get the route table for this VRF service id; for instance, we use the id 20029 that we just got:

[root@compute02 ~]# ovs-appctl vrf/route-table 20029
-----------------+----------+--------+------------+------------+-------------------------------
          Routes | Duration | Cookie |  Pkt Count |  Pkt Bytes |  EVPN-Id or Local/remote Out port
-----------------+----------+--------+------------+------------+-------------------------------
    10.37.120.66 |    1357s |    0x6 |          0 |          0 | 20030
    10.37.129.42 |    1369s |    0x6 |          0 |          0 | 20030
   10.37.159.184 |    1353s |    0x6 |          0 |          0 | 20030
   10.37.133.221 |    1362s |    0x6 |          0 |          0 | 20030
    10.37.76.239 |    1366s |    0x6 |          0 |          0 | 20030
   10.37.100.116 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.82.136 |    1376s |    0x6 |          0 |          0 | 20030
    10.37.186.42 |    1373s |    0x6 |          0 |          0 | 20030
     10.37.62.63 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.234.60 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.36.162 |    1380s |    0x6 |          0 |          0 | 20030
    10.37.0.0/16 |     119s |    0x6 |          0 |          0 | 20030
       0.0.0.0/0 |    1380s |    0x6 |          0 |          0 |
-----------------+----------+--------+------------+------------+-------------------------------

Get the MAC table for the associated EVPN service ID.

[root@compute02 ~]# ovs-appctl evpn/mac-table 20030


evpn_id: 20030	gen_id: 0x6	vni_id: 0xb2e041	ref_cnt: 10	ltep_port: 6
mode: L3_MODE	arp_proxy: DISABLED	aging_period: 300
pat_enabled: DISABLED	default_action: drop	dhcp_enabled: ENABLED	dhcp_relay: DISABLED	dhcp_pool: DISABLED
resiliency: DISABLED 	l2_encryption:DISABLED
subnet: 10.37.0.0	mask: 255.255.0.0	gw: 10.37.0.1	gw_mac: 68:54:ed:00:00:01

dhcp servers: mac_count: 11	cookie: 455606272

------------------+------+----------+----------+--------+------------+------------+-------------
              Mac | Port | Duration |   Expiry | Cookie |  Pkt Count |  Pkt Bytes |  VM Port name
------------------+------+----------+----------+--------+------------+------------+-------------
ff:ff:ff:ff:ff:ff |    - |    1486s |       0s |    0x6 |          0 |          0 | flood
02:ff:1e:3f:70:09 |    8 |    1486s |       0s |    0x6 |          0 |          0 | Vxlan: key=11722817 remote_ip=10.0.0.11
f2:87:87:aa:3b:a4 |   14 |    1463s |       0s |    0x6 |          0 |          0 | eth-lxc-15281 (grave_euler)
7e:ff:1e:10:18:55 |    8 |    1486s |       0s |    0x6 |          0 |          0 | Vxlan: key=11722817 remote_ip=10.0.0.11
66:87:87:06:c5:4d |   12 |    1472s |       0s |    0x6 |          0 |          0 | eth-lxc-15034 (desperate_archimedes)
4a:87:87:1f:75:d6 |    4 |    1486s |       0s |    0x6 |          0 |          0 | eth-lxc-14546 (suspicious_mirzakhani)
6e:87:87:ca:8d:40 |   11 |    1475s |       0s |    0x6 |          0 |          0 | eth-lxc-14897 (gloomy_liskov)
a2:87:87:37:d6:68 |   13 |    1468s |       0s |    0x6 |          0 |          0 | eth-lxc-15164 (modest_keller)
36:87:87:71:5b:9e |    9 |    1482s |       0s |    0x6 |          0 |          0 | eth-lxc-14666 (hopeful_nobel)
c2:ff:1e:82:1c:e9 |    8 |    1486s |       0s |    0x6 |          0 |          0 | Vxlan: key=11722817 remote_ip=10.0.0.11
16:87:87:5e:42:dc |   10 |    1478s |       0s |    0x6 |          0 |          0 | eth-lxc-14775 (backstabbing_thompson)
76:87:87:1e:c2:6b |   15 |    1459s |       0s |    0x6 |          0 |          0 | eth-lxc-15406 (fervent_goldstine)
------------------+------+----------+----------+--------+------------+------------+-------------
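
If you want to see which other debugging hooks your VRS build exposes, the standard OVS way of enumerating them works here too (just a sketch; the exact set of Nuage-specific vrf/evpn commands depends on the VRS version you are running):

[root@compute02 ~]# ovs-appctl list-commands | grep -Ei 'vrf|evpn'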

Useful Nuage VSC Commands

Check which vswitches (VRSs) are being managed by the VSC

*A:vsc01# show vswitch-controller vswitches detail

===============================================================================
Virtual Switch Table
===============================================================================
vswitch-instance           : va-10.0.0.4/1
Personality                : VRS_G
Uptime                     : 2d 12:00:26    VM Count                 : 0
Num of hostIf              : 0              Num of bridgeIf          : 1
Num of multiVMs            : 0
OF version                 : 1              OF nego. version         : 1
OF Conn. port              : 6633
Cntrl. role                : primary        Cntrl. Conn. type        : none
Cntrl. crl lookup          : false
Cntrl. Conn. mode          : secure
Cntrl. Conn. state         : ready
Cntrl. client verification : false
Cntrl. client IP verification : false
Peer IP for resiliency     : -
Received Role from VRS_G   : master         Elected Role for VRS_G   : master
Gateway Hold Time(sec)     : 3              Gateway Echo Time(sec)   : 1
Gateway Topic              : nuage_gateway_id_10.0.0.4
Gateway Retry/Audit Time   : 915
XMPP error code            : 0
XMPP error text            : (Not Specified)
JSON Conn. State           : Up
JSON Sess. Uptime          : 2d 11:59:57
Static Peer                : False
XMPP Tls Profile           : n/a
OF Tls Profile             : n/a
Ovsdb Tls Profile          : n/a
Ovsdb Conn Type            : none

vswitch-instance           : va-10.0.0.11/1
Personality                : VRS
Uptime                     : 1d 03:17:15    VM Count                 : 17
Num of hostIf              : 0              Num of bridgeIf          : 0
Num of multiVMs            : 0
OF version                 : 1              OF nego. version         : 1
OF Conn. port              : 6633
Cntrl. role                : primary        Cntrl. Conn. type        : none
Cntrl. crl lookup          : false
Cntrl. Conn. mode          : secure
Cntrl. Conn. state         : ready
Cntrl. client verification : false
Cntrl. client IP verification : false
Hold Time(sec)             : 15             Echo Time(sec)           : 5
JSON Conn. State           : Up
JSON Sess. Uptime          : 1d 03:17:13
Static Peer                : False
XMPP Tls Profile           : n/a
OF Tls Profile             : n/a
Ovsdb Tls Profile          : n/a
Ovsdb Conn Type            : none

vswitch-instance           : va-10.0.0.12/1
Personality                : VRS
Uptime                     : 2d 12:00:44    VM Count                 : 8
Num of hostIf              : 0              Num of bridgeIf          : 0
Num of multiVMs            : 0
OF version                 : 1              OF nego. version         : 1
OF Conn. port              : 6633
Cntrl. role                : primary        Cntrl. Conn. type        : none
Cntrl. crl lookup          : false
Cntrl. Conn. mode          : secure
Cntrl. Conn. state         : ready
Cntrl. client verification : false
Cntrl. client IP verification : false
Hold Time(sec)             : 15             Echo Time(sec)           : 5
JSON Conn. State           : Up
JSON Sess. Uptime          : 2d 12:00:28
Static Peer                : False
XMPP Tls Profile           : n/a
OF Tls Profile             : n/a
Ovsdb Tls Profile          : n/a
Ovsdb Conn Type            : none

-------------------------------------------------------------------------------
No. virtual switches: 3
===============================================================================

Check which virtual instances are being managed for a specific enterprise like “ACME Corp”. In this case we are showing container (Docker) names.

*A:vsc01# show vswitch-controller virtual-machines enterprise "ACME Corp"

===============================================================================
Virtual Machine Table
===============================================================================
vswitch-instance        VM Name          UUID
-------------------------------------------------------------------------------
va-10.0.0.11/1          tender_meitner   35c5fcc9-11f1-b809-19ae-6d0167702e2c
va-10.0.0.11/1          hungry_mclean    497c0ee3-0696-fe66-35f7-cad3ecebc72b
va-10.0.0.11/1          boring_ardinghe* 57e8e917-974f-2b63-ba35-72d7e3752f01
va-10.0.0.11/1          prickly_northcu* 65e57580-a51b-58fc-b783-1744b2dc477d
va-10.0.0.11/1          berserk_visvesv* 732e4e99-d689-146d-fd51-681d8b80946a
va-10.0.0.11/1          trusting_keller  76850489-155d-b55d-dda6-dc8f2d729956
va-10.0.0.11/1          sick_leakey      77e628c9-3939-f620-51d3-35cacbd90f5c
va-10.0.0.11/1          sleepy_roentgen  80142e10-dc97-071a-6207-28e33b2a2166
va-10.0.0.11/1          modest_chandras* 8646cf04-9d24-f317-ebcf-7901c2b4590f
va-10.0.0.11/1          gigantic_wescoff 8d14224d-eaed-a50c-182c-8a5154e96516
va-10.0.0.11/1          prickly_mahavira 91233ace-7512-fa0b-f674-da68ba71c470
va-10.0.0.11/1          jovial_franklin  9258a63d-05f0-bf04-babe-ee107b97e961
va-10.0.0.11/1          lonely_keller    93446d20-3343-4c22-e6d4-e68f91d15818
va-10.0.0.11/1          admiring_murdock b325e587-5e6c-1c3d-b8a4-0bff402e6745
va-10.0.0.11/1          insane_kare      c0bca010-43c5-5078-c23d-fbec4ee97361
va-10.0.0.11/1          silly_feynman    ed4b8fe3-a4b2-2b3a-10d2-c4b5868dd939
va-10.0.0.11/1          jovial_blackwell f3899140-a0d4-921e-f89a-00c5e0cc6f0a
va-10.0.0.12/1          modest_keller    357a8ad9-bd16-c93e-2bd4-c02c69fc0b07
va-10.0.0.12/1          grave_euler      3a248af3-89b8-0f1f-1626-5f271682a746
va-10.0.0.12/1          suspicious_mirz* 3ae74fc1-4ac8-9b66-4ee8-4e1178f68b5c
va-10.0.0.12/1          desperate_archi* 52a98dcd-6628-690c-bb6b-ec5734d5ce77
va-10.0.0.12/1          backstabbing_th* aa46363a-2726-b10c-5a9a-95c6a2509752
va-10.0.0.12/1          gloomy_liskov    abd4d701-ec2f-8238-9a79-3e863638c203
va-10.0.0.12/1          fervent_goldsti* b6378095-5092-3643-33c0-8ae40da4f073
va-10.0.0.12/1          hopeful_nobel    fb43ddbd-81ee-6d6f-a87a-5dbe0bbb1774
-------------------------------------------------------------------------------
No. of virtual machines: 25
===============================================================================

We can get more details for each instance, including its MAC address, IP address, VPRN and EVPN.

*A:vsc01#  show vswitch-controller vports type vm enterprise "ACME Corp"

===============================================================================
Virtual Port Table
===============================================================================
VP Name                    VM Name                    VPRN    EVPN    Multicast
  VP IP Address              MacAddress                               Channel
                                                                      Map
-------------------------------------------------------------------------------
va-10.0.0.11/1/26          tender_meitner             20024   20026   Disabled
  10.10.10.35/24             1a:ff:1e:9b:b7:03
va-10.0.0.11/1/25          hungry_mclean              20024   20026   Disabled
  10.10.10.137/24            62:ff:1e:bc:3b:34
va-10.0.0.11/1/16          boring_ardinghelli         20024   20025   Disabled
  10.37.39.216/16            f6:ff:1e:55:a8:43
va-10.0.0.11/1/29          prickly_northcutt          20029   20030   Disabled
  10.37.62.63/16             c2:ff:1e:82:1c:e9
va-10.0.0.11/1/24          berserk_visvesvaraya       20024   20025   Disabled
  10.37.116.223/16           be:ff:1e:0a:09:a0
va-10.0.0.11/1/20          trusting_keller            20024   20025   Disabled
  10.37.168.238/16           8e:ff:1e:62:d8:09
va-10.0.0.11/1/23          sick_leakey                20024   20025   Disabled
  10.37.165.46/16            3a:ff:1e:2a:7a:79
va-10.0.0.11/1/21          sleepy_roentgen            20024   20025   Disabled
  10.37.123.69/16            86:ff:1e:7a:d6:6e
va-10.0.0.11/1/18          modest_chandrasekhar       20024   20025   Disabled
  10.37.119.92/16            f2:ff:1e:4a:1f:63
va-10.0.0.11/1/15          gigantic_wescoff           20024   20025   Disabled
  10.37.83.53/16             82:ff:1e:6a:a0:66
va-10.0.0.11/1/27          prickly_mahavira           20024   20026   Disabled
  10.10.10.6/24              2e:ff:1e:15:01:02
va-10.0.0.11/1/22          jovial_franklin            20024   20025   Disabled
  10.37.134.38/16            7a:ff:1e:a5:e1:2a
va-10.0.0.11/1/30          lonely_keller              20029   20030   Disabled
  10.37.100.116/16           7e:ff:1e:10:18:55
va-10.0.0.11/1/28          admiring_murdock           20024   20026   Disabled
  10.10.10.105/24            ee:ff:1e:06:3c:cc
va-10.0.0.11/1/17          insane_kare                20024   20025   Disabled
  10.37.202.88/16            16:ff:1e:0b:d1:c7
va-10.0.0.11/1/19          silly_feynman              20024   20025   Disabled
  10.37.105.245/16           1a:ff:1e:d3:b0:e7
va-10.0.0.11/1/31          jovial_blackwell           20029   20030   Disabled
  10.37.36.162/16            02:ff:1e:3f:70:09
va-10.0.0.12/1/6           modest_keller              20029   20030   Disabled
  10.37.133.221/16           a2:87:87:37:d6:68
va-10.0.0.12/1/7           grave_euler                20029   20030   Disabled
  10.37.120.66/16            f2:87:87:aa:3b:a4
va-10.0.0.12/1/1           suspicious_mirzakhani      20029   20030   Disabled
  10.37.234.60/16            4a:87:87:1f:75:d6
va-10.0.0.12/1/5           desperate_archimedes       20029   20030   Disabled
  10.37.76.239/16            66:87:87:06:c5:4d
va-10.0.0.12/1/3           backstabbing_thompson      20029   20030   Disabled
  10.37.186.42/16            16:87:87:5e:42:dc
va-10.0.0.12/1/4           gloomy_liskov              20029   20030   Disabled
  10.37.129.42/16            6e:87:87:ca:8d:40
va-10.0.0.12/1/8           fervent_goldstine          20029   20030   Disabled
  10.37.159.184/16           76:87:87:1e:c2:6b
va-10.0.0.12/1/2           hopeful_nobel              20029   20030   Disabled
  10.37.82.136/16            36:87:87:71:5b:9e
-------------------------------------------------------------------------------
No. of virtual ports: 25
===============================================================================

Show the service details; the argument can be a VPRN or an EVPN service ID. For a VPRN you can see the VRF-target configuration, which is important when interconnecting with VRFs on the PE. We are taking the VPRN 20024 as an example.

*A:vsc01# show service id 20024 base

===============================================================================
Service Basic Information
===============================================================================
Service Id        : 20024               Vpn Id            : 0
Service Type      : VPRN
Name              : (Not Specified)
Description       : (Not Specified)
Customer Id       : 10006
Last Status Change: 06/27/2016 17:55:03
Last Mgmt Change  : 06/27/2016 17:55:03
Admin State       : Up                  Oper State        : Up

Route Dist.       : 65534:13842         VPRN Type         : regular
AS Number         : None                Router Id         : 255.0.0.0
ECMP              : Enabled             ECMP Max Routes   : 1
Max IPv4 Routes   : No Limit            Auto Bind         : GRE
Max IPv6 Routes   : No Limit
Ignore NH Metric  : Disabled
Hash Label        : Disabled
Vrf Target        : target:65534:499
Vrf Import        : None
Vrf Export        : None
MVPN Vrf Target   : None
MVPN Vrf Import   : None
MVPN Vrf Export   : None
Car. Sup C-VPN    : Disabled
Label mode        : vrf
BGP VPN Backup    : Disabled

SAP Count         : 0                   SDP Bind Count    : 0

-------------------------------------------------------------------------------
Service Access & Destination Points
-------------------------------------------------------------------------------
Identifier                               Type         AdmMTU  OprMTU  Adm  Opr
-------------------------------------------------------------------------------
vpls:backhaul-evpn20028                  rvpls        0       1500    Up   Up
vpls:evpn20025                           rvpls        0       1500    Up   Up
vpls:evpn20026                           rvpls        0       1500    Up   Up
===============================================================================

Now let's check the EVPN 20025.

*A:vsc01# show service id 20025 base

===============================================================================
Service Basic Information
===============================================================================
Service Id        : 20025               Vpn Id            : 0
Service Type      : VPLS
Name              : evpn20025
Description       : (Not Specified)
Customer Id       : 10006
Last Status Change: 06/27/2016 17:55:03
Last Mgmt Change  : 06/27/2016 17:55:03
Admin State       : Up                  Oper State        : Up
MTU               : 1514                Def. Mesh VC Id   : 20025
SAP Count         : 10                  SDP Bind Count    : 1
Snd Flush on Fail : Disabled            Host Conn Verify  : Disabled
Propagate MacFlush: Disabled            Per Svc Hashing   : Disabled
Allow IP Intf Bind: Enabled
InterConnect vlan*: 0                   InterConnect vlan*: 0
Def. Gateway IP   : None
Def. Gateway MAC  : None
Temp Flood Time   : Disabled            Temp Flood        : Inactive
Temp Flood Chg Cnt: 0
BGP-EVPN Encap    : vxlan
Vxlan Tenant ID   : 368626

-------------------------------------------------------------------------------
Service Access & Destination Points
-------------------------------------------------------------------------------
Identifier                               Type         AdmMTU  OprMTU  Adm  Opr
-------------------------------------------------------------------------------
sap:va-10.0.0.11/1/15:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/16:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/17:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/18:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/19:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/20:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/21:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/22:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/23:0                  q-tag        1578    1578    Up   Up
sap:va-10.0.0.11/1/24:0                  q-tag        1578    1578    Up   Up
sdp:17406:368626 SB(10.0.0.11)           EvpnPmsi     0       0       Up   Down
===============================================================================
* indicates that the corresponding row element may have been truncated.

Check out the routes in the vswitch-controller for a specific enterprise and domain (e.g. “dom2”)

*A:vsc01# show vswitch-controller ip-routes enterprise "ACME Corp" domain "dom2"

===============================================================================
VPRN Routes
===============================================================================

-------------------------------------------------------------------------------
Legend:
Flag : P = Primary, S = Secondary, V = Virtual Next Hop on NAT, I = IPSEC
-------------------------------------------------------------------------------
Flag Prefix/                       NextHop                       Owner
     Prefix Length
-------------------------------------------------------------------------------
---  10.37.0.0/16                  10.0.0.11                     NVC_LOCAL
---  10.37.36.162/32               va-10.0.0.11/1/31             NVC
---  10.37.62.63/32                va-10.0.0.11/1/29             NVC
---  10.37.76.239/32               va-10.0.0.12/1/5              NVC
---  10.37.82.136/32               va-10.0.0.12/1/2              NVC
---  10.37.100.116/32              va-10.0.0.11/1/30             NVC
---  10.37.120.66/32               va-10.0.0.12/1/7              NVC
---  10.37.129.42/32               va-10.0.0.12/1/4              NVC
---  10.37.133.221/32              va-10.0.0.12/1/6              NVC
---  10.37.159.184/32              va-10.0.0.12/1/8              NVC
---  10.37.186.42/32               va-10.0.0.12/1/3              NVC
---  10.37.234.60/32               va-10.0.0.12/1/1              NVC
-------------------------------------------------------------------------------
No. of IP routes: 12
-------------------------------------------------------------------------------
===============================================================================

Find the ingress ACLs associated with a specific port

*A:vsc01# show vswitch-controller vports vport-name va-10.0.0.11/1/29 acl ingress-security

===============================================================================
Virtual Port Ingress ACL Table
===============================================================================
Pri  ACL UUID                                E-Type         Action
     SrcIP               DestIP              S-Prt[Min-Max] D-Prt[Min-Max]
     Proto               Match DSCP          FC override    Flow log/Stats log
     Reflexive ACL       Redirect Tgt                       PGID/Type
     PolicyGroupTag
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
VP Name:  va-10.0.0.11/1/29    VLAN ID:  0
-------------------------------------------------------------------------------
0    00000000-0000-0000-0000-000000000000    0x800          Drop
     0.0.0.0/0           0.0.0.0/0           0-0            0-0
     0                   0xff                n/a            False/False
     False               -                                  -
     0:0
1    00000000-0000-0000-0000-000000000000    0x806          Fwd
     0.0.0.0/0           0.0.0.0/0           0-0            0-0
     0                   0xff                n/a            False/False
     False               -                                  -
     0:0
2    00000000-0000-0000-0000-000000000000    0x0            Drop
     0.0.0.0/0           0.0.0.0/0           0-0            0-0
     0                   0xff                n/a            False/False
     False               -                                  -
     0:0
-------------------------------------------------------------------------------
No. of ACL's: 3
-------------------------------------------------------------------------------
Total No. of Ingress ACL's: 3
===============================================================================

See you!


Resize and manage cloud-init on KVM with CentOS cloud images


Hi there. The following tools will help you create virsh domains using CentOS cloud images (they can be downloaded from http://cloud.centos.org/centos/7/images). The difference from my previous post “USING CENTOS CLOUD IMAGES WITH VIRSH AND NUAGE METADATA” is that I am not using Nuage this time; I am using a simple Linux bridge attached to the eth0 interface. You could use this to build your own Linux lab at home.

the bridge configuration

An example of the bridge configuration follows:

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-bridge0
DEVICE="bridge0"
ONBOOT="yes"
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.5
NETMASK=255.255.255.0

And the wired interface:

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
BOOTPROTO="static"
DEVICE="enp0s3"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE=bridge0
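
Once both files are in place, restarting the network service should enslave the wired interface to the bridge (a quick sketch; the interface names come from my box and will likely differ on yours):

[root@localhost ~]# systemctl restart network   # or: service network restart
[root@localhost ~]# brctl show                  # enp0s3 should now hang off bridge0 (brctl ships in bridge-utils)
[root@localhost ~]# ip addr show bridge0        # the bridge carries the 192.168.1.5 address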

the script

I’ll create domains with a fixed IP address and a resized boot disk. The script I am showing is based on Giovanni’s post: Create a Linux Lab on KVM Using Cloud Images.

I am highlighting some lines in the script that you may want to change to fit your environment. Be aware that I am using virt-install and virt-resize, so be sure to install them in advance.

virt-resize and qemu-img can take a while depending on the size of the disk.

Check the cloud-init details: I am using metadata to set the IP address. You will also want to change the SSH public key. I am setting the centos user’s password to “reverse” in case you want to check things out through the console.

Don’t forget to install the virt-install and libguestfs-tools packages.
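
On a CentOS 7 host that usually boils down to something like this (a sketch; adjust the package names to your distro):

yum install -y virt-install libguestfs-tools genisoimage qemu-img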

#!/bin/bash

# Take five arguments from the command line: VM name, memory (MB), vCPUs, disk size (GB) and IP address
if ! [ $# -eq 5 ]; then
    echo "Usage: $0 <node-name> <memory> <vcpus> <disk-GB> <ip-address>"
    exit 1
fi

# Check if domain already exists
virsh dominfo $1 > /dev/null 2>&1
if [ "$?" -eq 0 ]; then
    echo -n "[WARNING] $1 already exists.  "
    read -p "Do you want to overwrite $1 [y/N]? " -r
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo ""
    else
        echo -e "\nNot overwriting $1. Exiting..."
        exit 1
    fi
fi

# Directory to store images
DIR=/var/lib/libvirt/images

# Location of cloud image
IMAGE=$DIR/CentOS-6-x86_64-GenericCloud.qcow2

# Amount of RAM in MB
MEM=$2

# Number of virtual CPUs
CPUS=$3
DISK_GB=$4
IPADDR=$5
GWTY=192.168.1.254
MSK=255.255.255.0
DNS=192.168.1.1
DOMAIN=nuage.lab

# Cloud init files
USER_DATA=user-data
META_DATA=meta-data
CI_ISO=$1-cidata.iso
DISK=$1.qcow2

# Bridge for VMs (default on Fedora is bridge0)
BRIDGE=bridge0

# Start clean
rm -rf $DIR/$1
mkdir -p $DIR/$1

pushd $DIR/$1 > /dev/null

    # Create log file
    touch $1.log

    echo "$(date -R) Destroying the $1 domain (if it exists)..."

    # Remove domain with the same name
    virsh destroy $1 >> $1.log 2>&1
    virsh undefine $1 >> $1.log 2>&1

    # cloud-init config: set hostname, remove cloud-init package,
    # and add ssh-key
    cat > $USER_DATA << _EOF_
#cloud-config

# Hostname management
preserve_hostname: False
hostname: $1
fqdn: $1.$DOMAIN

# Remove cloud-init when finished with it
runcmd:
  - [ yum, -y, remove, cloud-init ]
  - echo "GATEWAY=$GWTY" >> /etc/sysconfig/network
  - echo "nameserver $DNS" >> /etc/resolv.conf
  - echo "domain $DOMAIN" >> /etc/resolv.conf
  - /etc/init.d/network restart
  - ifdown eth0
  - ifup eth0

# Configure where output will go
output:
  all: ">> /var/log/cloud-init.log"

chpasswd:
  list: |
    centos:reverse
  expire: False

# configure interaction with ssh server
ssh_svcname: ssh
ssh_deletekeys: True
ssh_genkeytypes: ['rsa', 'ecdsa']

# Install my public ssh key to the first user-defined user configured
# in cloud.cfg in the template (which is centos for CentOS cloud images)
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCetM2yjjUNYO8pm4IJxj8KzOWJirdOYu/VNZvhQ95hcgvi6VtgDhwFrCsPRqCzOD8+XSfI2evkvNCsj8LOpB8K3VUJxsqNzcKuv5l2157rl6+XksyH8bLHUxA2XG1zPIYeFs+2cwbNvENnKRzl7ZgEeCRYKbS+OcAOmk0+rGBx7rHTSg+MfkLtX3VgfNdUxx+ZKeAMqDkSuKSTlOZJDjIbAW0pCffp mau@nuage.lab
_EOF_

# Managing the cloud-init metadata now
    cat > $META_DATA << _EOF_
instance-id: $1
local-hostname: $1
network-interfaces: |
  iface eth0 inet static
  address $IPADDR
  network ${IPADDR%.*}.0
  netmask $MSK
  broadcast ${IPADDR%.*}.255

# bootcmd:
#  - ifdown eth0
#  - ifup eth0
_EOF_
    #echo "instance-id: $1; local-hostname: $1" > $META_DATA

    echo "$(date -R) Copying template image..."
    echo "INFO: qemu-img create -f qcow2 -o preallocation=metadata $DISK ${DISK_GB}G"
    qemu-img create -f qcow2 -o preallocation=metadata $DISK ${DISK_GB}G
    virt-resize --expand /dev/sda1 $IMAGE $DISK

    echo "Converting and sizing $IMAGE to $DISK"
    #cp $IMAGE $DISK

    # Create CD-ROM ISO with cloud-init config
    echo "$(date -R) Generating ISO for cloud-init..."
    genisoimage -output $CI_ISO -volid cidata -joliet -r $USER_DATA $META_DATA &>> $1.log

    echo "$(date -R) Installing the domain and adjusting the configuration..."
    echo "[INFO] Installing with the following parameters:"
    echo "virt-install --import --name $1 --ram $MEM --vcpus $CPUS --disk \
    $DISK,format=qcow2,bus=virtio --disk $CI_ISO,device=cdrom --network \
    bridge=bridge0,model=virtio --os-type=linux --os-variant=rhel6 --noautoconsole --noapic"


    virt-install --import --name $1 --ram $MEM --vcpus $CPUS --disk \
    $DISK,format=qcow2,bus=virtio --disk $CI_ISO,device=cdrom --network \
    bridge=bridge0,model=virtio --os-type=linux --os-variant=rhel6 --noautoconsole --noapic \
    --accelerate

    #virsh console $1


    FAILS=0
    while true; do
        ping -c 1 $IPADDR >/dev/null 2>&1
        if [ $? -ne 0 ] ; then #if ping exits nonzero...
           FAILS=$[FAILS + 1]
           echo "INFO: Checking if server $1 with IP $IPADDR is online. ($FAILS out of 20)"
        else
           echo "INFO: server $1 is alive. let's remove cloud init files"
           break
        fi
        if [ $FAILS -gt 20 ]; then
           echo "INFO: Server is still offline after 20min. I will end here!"
           exit
        fi
        sleep 60
    done

    # Eject cdrom
    echo "$(date -R) Cleaning up cloud-init..."
    virsh change-media $1 hda --eject --config >> $1.log

    # Remove the unnecessary cloud init files
    rm $USER_DATA $CI_ISO

    echo "$(date -R) DONE. SSH to $1 using $IP, with  username 'centos'."

popd > /dev/null

Running the script

The expected output is:

[root@localhost ~]# ./virt-create.sh test 1024 1 12 192.168.1.7
[WARNING] test already exists.  Do you want to overwrite test [y/N]? y

Mon, 01 Aug 2016 10:57:21 -0500 Destroying the test domain (if it exists)...
Mon, 01 Aug 2016 10:57:22 -0500 Copying template image...
INFO: qemu-img create -f qcow2 -o preallocation=metadata test.qcow2 12G
Formatting 'test.qcow2', fmt=qcow2 size=12884901888 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
Examining /var/lib/libvirt/images/CentOS-6-x86_64-GenericCloud.qcow2 ...
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
**********

Summary of changes:

/dev/sda1: This partition will be resized from 8.0G to 12.0G.  The
    filesystem ext4 on /dev/sda1 will be expanded using the 'resize2fs'
    method.

    **********
Setting up initial partition table on test.qcow2 ...
Copying /dev/sda1 ...
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
Expanding /dev/sda1 using the 'resize2fs' method ...

Resize operation completed with no errors.  Before deleting the old disk,
carefully check that the resized disk boots and works correctly.
Converting and sizing /var/lib/libvirt/images/CentOS-6-x86_64-GenericCloud.qcow2 to test.qcow2
Mon, 01 Aug 2016 11:00:47 -0500 Generating ISO for cloud-init...
Mon, 01 Aug 2016 11:00:47 -0500 Installing the domain and adjusting the configuration...
[INFO] Installing with the following parameters:
virt-install --import --name test --ram 1024 --vcpus 1 --disk     test.qcow2,format=qcow2,bus=virtio --disk test-cidata.iso,device=cdrom --network     bridge=bridge0,model=virtio --os-type=linux --os-variant=rhel6 --noautoconsole --noapic
WARNING  KVM acceleration not available, using 'qemu'

Starting install...
Creating domain...                                                                                                                                        |    0 B  00:00:00
Domain creation completed.
INFO: Checking if server test with IP 192.168.1.7 is online. (1 out of 20)
INFO: Checking if server test with IP 192.168.1.7 is online. (2 out of 20)
INFO: Checking if server test with IP 192.168.1.7 is online. (3 out of 20)
INFO: Checking if server test with IP 192.168.1.7 is online. (4 out of 20)
INFO: Checking if server test with IP 192.168.1.7 is online. (5 out of 20)
INFO: server test is alive. let's remove cloud init files
Mon, 01 Aug 2016 11:06:03 -0500 Cleaning up cloud-init...
Mon, 01 Aug 2016 11:06:03 -0500 DONE. SSH to test using , with  username 'centos'.

See you later


Docker – Nuage shows off micro-segmentation on containers


Yes! I finally got to dig into containers.
My first impression is that they are faster and easier to handle. I see a hard future ahead for hypervisors. However, apps must be designed to cope with that sort of constant change.

Nuage brings really good micro-segmentation to them and helps us add advanced routing, security and network settings (e.g. forwarding, QoS).

I am not covering how to install Nuage or how to make it work with Docker here. That will be a future subject on this site.

I am also using KVM to show how we can connect VMs and containers in the same subnet or Layer-3 domain. Nice, don’t you think? Details on how I am using KVM can be found in my post: USING CENTOS CLOUD IMAGES WITH VIRSH AND NUAGE METADATA

Enough said… now it’s time to play.

some useful scripts

I’ve written some scripts to provision and remove containers. The next one is called create_cont.sh. You just need to pass the network name and how many containers you want. If you need to change the zone, domain or enterprise, you will have to do it inside the script.

#!/bin/bash
# usage: create_cont.sh <nuage_subnet_name> <how-many-containers>
COUNTER=0
while [  $COUNTER -lt $2 ]; do
    docker run -d -i -t -e "NUAGE-DOMAIN=dom2" -e "NUAGE-ZONE=zone0" -e "NUAGE-NETWORK=${1}" -e "NUAGE-ENTERPRISE=ACME Corp" -e "NUAGE-USER=mau" --net=none centos /bin/bash
    echo creating container $COUNTER ...
    let COUNTER=COUNTER+1
done

You should be able to remove them as fast as you create them:

#!/bin/bash
# usage: remove_cont.sh
for i in $( docker ps -q ); do
    docker stop $i
    docker rm $i
    echo removing $i ...
done

That’s all. Pretty easy, don’t you think?

Cheat list of commands

This is the list of commands I will be using throughout this demo.

# bash access to container
docker exec -i -t <container-name> /bin/bash
# list containers
docker ps
# define domain virsh
virsh define <xml file>
# start domain
virsh start <domain name>
# list domains
virsh list --all

Playing with containers on two different compute nodes

I’ve created 10 containers connected to subnet0 as follows:

[root@compute01 cheat_cont]# ./create_cont.sh subnet0 10
8d14224deaeda50c182c8a5154e96516695c11a7d49214589390c8e6fe369d55
creating container 0 ...
57e8e917974f2b63ba3572d7e3752f01ea0e7eaee93baf78c8d1fb51a72ab026
creating container 1 ...
c0bca01043c55078c23dfbec4ee9736118210d1eb7b57b75f3fa38558f6d4d3f
creating container 2 ...
8646cf049d24f317ebcf7901c2b4590f26aa74cf4829be7ae7d3ccf57fd9c57f
creating container 3 ...
ed4b8fe3a4b22b3a10d2c4b5868dd939e98f1aeb7e652a663e4d6199bea4c710
creating container 4 ...
76850489155db55ddda6dc8f2d7299562b3fc740977060466d42258c530e1c3d
creating container 5 ...
80142e10dc97071a620728e33b2a21660ca6721ffd7c3f215103baa1403acd18
creating container 6 ...
9258a63d05f0bf04babeee107b97e9612bb9777cea96da28d82dba8063e2463f
creating container 7 ...
77e628c93939f62051d335cacbd90f5c51642cbfc18907694d3c9b19d927630f
creating container 8 ...
732e4e99d689146dfd51681d8b80946a4950a74ca143a95e5620ed64130d1017
creating container 9 ...

Let’s check them out

[root@compute01 cheat_cont]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
732e4e99d689        centos              "/bin/bash"         21 seconds ago      Up 19 seconds                           berserk_visvesvaraya
77e628c93939        centos              "/bin/bash"         22 seconds ago      Up 20 seconds                           sick_leakey
9258a63d05f0        centos              "/bin/bash"         24 seconds ago      Up 22 seconds                           jovial_franklin
80142e10dc97        centos              "/bin/bash"         26 seconds ago      Up 24 seconds                           sleepy_roentgen
76850489155d        centos              "/bin/bash"         27 seconds ago      Up 26 seconds                           trusting_keller
ed4b8fe3a4b2        centos              "/bin/bash"         29 seconds ago      Up 27 seconds                           silly_feynman
8646cf049d24        centos              "/bin/bash"         31 seconds ago      Up 29 seconds                           modest_chandrasekhar
c0bca01043c5        centos              "/bin/bash"         33 seconds ago      Up 31 seconds                           insane_kare
57e8e917974f        centos              "/bin/bash"         34 seconds ago      Up 33 seconds                           boring_ardinghelli
8d14224deaed        centos              "/bin/bash"         36 seconds ago      Up 34 seconds                           gigantic_wescoff

Check out how they are shown in the VSD:

[Screenshot: the containers as shown in Nuage VSD – network micro-segmentation view]

Let’s get into one of them now and ping another container.

[root@compute01 cheat_cont]# docker exec -i -t gigantic_wescoff /bin/bash
[root@8d14224deaed /]# ping 10.37.134.38
PING 10.37.134.38 (10.37.134.38) 56(84) bytes of data.
64 bytes from 10.37.134.38: icmp_seq=1 ttl=64 time=120 ms
64 bytes from 10.37.134.38: icmp_seq=2 ttl=64 time=0.065 ms
64 bytes from 10.37.134.38: icmp_seq=3 ttl=64 time=0.064 ms
64 bytes from 10.37.134.38: icmp_seq=4 ttl=64 time=0.067 ms
64 bytes from 10.37.134.38: icmp_seq=5 ttl=64 time=0.066 ms

Now I will create 4 containers on the other compute server and check connectivity between containers on both.

[root@compute01 cheat_cont]# ./create_cont.sh subnet1 4
497c0ee30696fe6635f7cad3ecebc72bad16b53b3e089921af30a96c30fd9325
creating container 0 ...
35c5fcc911f1b80919ae6d0167702e2cb83fedb9b7fa907710c83995e1195e51
creating container 1 ...
91233ace7512fa0bf674da68ba71c47035a1add6c345ce112e334737665e0943
creating container 2 ...
b325e5875e6c1c3db8a40bff402e674574f17a564432603a51a5e4d59998b045
creating container 3 ...

Check them out:

[root@compute01 cheat_cont]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b325e5875e6c        centos              "/bin/bash"         17 seconds ago      Up 16 seconds                           admiring_murdock
91233ace7512        centos              "/bin/bash"         19 seconds ago      Up 17 seconds                           prickly_mahavira
35c5fcc911f1        centos              "/bin/bash"         20 seconds ago      Up 18 seconds                           tender_meitner
497c0ee30696        centos              "/bin/bash"         22 seconds ago      Up 20 seconds                           hungry_mclean

Ping now to check whether I can reach a container in the other subnet (routed within the same domain).

[root@compute01 cheat_cont]# docker exec -i -t hungry_mclean /bin/bash
[root@497c0ee30696 /]# ping 10.37.134.38
PING 10.37.134.38 (10.37.134.38) 56(84) bytes of data.
64 bytes from 10.37.134.38: icmp_seq=2 ttl=63 time=98.4 ms
64 bytes from 10.37.134.38: icmp_seq=3 ttl=63 time=0.086 ms
64 bytes from 10.37.134.38: icmp_seq=4 ttl=63 time=0.073 ms
64 bytes from 10.37.134.38: icmp_seq=5 ttl=63 time=0.087 ms

Done! See you


Bridge your dummy interface in CentOS 7


howdy,

I had to start using a dummy interface to bring mobility to my lab. It’s a demo in a box. Every time I tried to disconnect it and use another network, I had to make a lot of changes to my configuration: it was messy.

Create your permanent dummy interface

I had to make the following changes to my box:

[root@box01 ~]# cat /etc/modprobe.d/dummy.conf
install dummy /sbin/modprobe --ignore-install dummy; /sbin/ip link set name ethdummy1 dev dummy0
[root@box01 ~]# cat /etc/modules-load.d/dummy.conf
# Load dummy.ko at boot
dummy
[root@box01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ethdummy1
NAME=ethdummy1
DEVICE=ethdummy1
MACADDR=00:22:22:ff:ff:ff
IPADDR=10.10.10.1
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
NM_CONTROLLED=no

After a reboot, check if your dummy interface is there:

[root@box01 ~]# ifconfig ethdummy1
ethdummy1: flags=195<UP,BROADCAST,RUNNING,NOARP>  mtu 1500
        inet 10.10.10.1  netmask 255.255.255.0  broadcast 10.10.10.255
        ether 00:22:22:ff:ff:ff  txqueuelen 0  (Ethernet)
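
If you’d rather not reboot, loading the module by hand should have the same effect (a quick sketch that relies on the install line in /etc/modprobe.d/dummy.conf shown above):

[root@box01 ~]# modprobe dummy     # the install rule renames dummy0 to ethdummy1
[root@box01 ~]# ifup ethdummy1     # bring it up with the settings from ifcfg-ethdummy1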

Got it from this forum thread: Permanent dummy interface

turn your dummy into a bridge

I’ll attach ethdummy1 to bridge0:

[root@box01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ethdummy1
NAME=ethdummy1
DEVICE=ethdummy1
#MACADDR=00:22:22:ff:ff:ff
#IPADDR=10.10.10.1
#NETMASK=255.255.255.0
ONBOOT=yes
#TYPE=Ethernet
NM_CONTROLLED=no
BRIDGE=bridge0
[root@box01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bridge0
DEVICE="bridge0"
ONBOOT="yes"
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.10.10.1
NETMASK=255.255.255.0

Restart network services: service network restart
And let’s check this out.

[root@box01 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
bridge0		8000.002222ffffff	no		ethdummy1

Important Notes

Don’t forget to use iptables to translate IPs so that the VMs connected to this new bridge get Internet access (the servers need to use the dummy interface as their default gateway: 10.10.10.1).

# You need to translate your IPs thru eth0
# eth0 is the interface connected to the Network
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
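
One more thing I usually have to check (not shown above): the kernel must actually be forwarding packets between the bridge and eth0, otherwise the NAT rule never sees any traffic. A quick sketch:

# enable IPv4 forwarding now and make it persistent across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf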

That’s all
See ya!

