alp317's profile -
activity

You are mapping the provider network "vlan" to the bridge br-vlan (network_mappings: "vlan:br-vlan,vxlan:br-vxlan"). That means there should be an OVS bridge called "br-vlan" on the compute and controller nodes that is responsible for traffic into and out of the nodes for OpenStack networks using the "vlan" provider.

Note: there is a difference between the VLAN network *type* and the provider network *name*. You are also using "vlan" as the provider network name, which can lead to confusion.
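As a minimal sketch, these two distinct uses of the name "vlan" would show up in the ML2 configuration roughly like this (the file paths and VLAN ID range are assumptions, not taken from your deployment):

```ini
# openvswitch_agent.ini (sketch): provider network name "vlan" -> OVS bridge br-vlan
[ovs]
bridge_mappings = vlan:br-vlan

# ml2_conf.ini (sketch): the same label "vlan" used as the physical network
# name for the vlan *type* driver; the ID range 100:200 is an assumption
[ml2_type_vlan]
network_vlan_ranges = vlan:100:200
```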

From Horizon, when I view all the resources, I can see "asg". When I open it up, it contains another resource that points to the nested template. If I open that nested resource, I can see that it generated the outputs; however, they are not passed to "asg".

If the autoscaling group resource name is "asg", then I query get_attr: [asg]. This shows that attributes like outputs and outputs_list are empty, along with current_size. I am using nested templates; the nested template contains a load-balanced server, and there are outputs declared in that template.

I am using AutoScalingGroup to autoscale a Galera cluster. This also includes load balancing. I found an autoscaling example at https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml. I am able to run it successfully; however, the challenge I am facing is that the AutoScalingGroup is not getting any outputs from the nested "lb_server.yaml" template. The outputs are defined, but they are not delivered to the AutoScalingGroup.
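For reference, nested outputs of an OS::Heat::AutoScalingGroup are normally surfaced through its outputs/outputs_list attributes, keyed by the output name declared in the nested template. A sketch of the parent template's outputs section, assuming lb_server.yaml declares an output called server_ip (the name is an assumption):

```yaml
outputs:
  member_ips:
    # "server_ip" must match an output name declared in lb_server.yaml (assumed here)
    description: IP addresses of the scaled servers
    value: {get_attr: [asg, outputs_list, server_ip]}
```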

A similar question has been asked before, but it is quite old and has no answer.

Now this works fine in most scenarios, but sometimes the Python script fails because it cannot find the desired file.

At this point I want my stack creation (overcloud deployment) to fail. Instead, it keeps going and then the configure script also fails.
Can anyone please tell me how to add a check here that ensures the Python script exited with status code 0?
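One way to express such a check, sketched as a wrapper script: test the Python script's exit status explicitly and abort with a non-zero status so the deployment step is marked failed. Here `python3 -c "import sys; sys.exit(0)"` is only a stand-in for your real script invocation:

```shell
#!/bin/sh
# Sketch: abort this step if the Python script exits non-zero.
# Replace the python3 -c stand-in with the real script invocation.
if ! python3 -c "import sys; sys.exit(0)"; then
    echo "python script failed; aborting" >&2
    exit 1
fi
echo "python script succeeded"
```

Alternatively, `set -e` at the top of the wrapper makes the shell exit on the first failing command, which has the same effect for a simple linear script.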

I was experimenting with an OVS-DPDK based deployment of OpenStack (using TripleO). The documentation I followed suggested using different datapaths for control plane networks and tenant (VM) networks, with Linux bonds for the control plane networks. While experimenting, I deployed OpenStack with the internal API network (a control plane network) on OVS-DPDK and, to my surprise, the network worked. But after testing it I found it was giving very poor performance, around 150-250 Mbit/s on a 20 Gbit/s bonded link, while the networks on the Linux bond were working fine.

After reading a lot of questions on forums, I couldn't find an answer, other than a casual mention that it's a rule of thumb not to put kernel and DPDK datapath ports on the same bridge. Also, using ethtool, I found that the tagged VLAN network I created over the DPDK bridge reports a link speed of 10 Mb/s. Can someone please explain what's happening here?

I finally figured it out. Nova's default thread policy is "require", meaning it will only use vCPUs whose siblings are also present in the Nova vCPU list. As I was dedicating the first 8 CPUs (0..7) to the host, their siblings were not used by Nova, hence the missing 8 vCPUs. The solution was to use sibling sets.
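A sketch of what "using sibling sets" can look like in nova.conf, assuming a 40-CPU host where thread siblings pair as n and n+20 (this pairing is an assumption; verify with /sys/devices/system/cpu/cpu*/topology/thread_siblings_list):

```ini
# nova.conf (sketch): reserving CPUs 0-7 for the host means their siblings
# 20-27 must also be excluded, so Nova is given only complete sibling pairs
[DEFAULT]
vcpu_pin_set = 8-19,28-39
```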

Yes, I verified that instances are pinned to a single NUMA node by looking at the cpuset tag inside libvirt.xml, and 4 vCPUs are free on both nodes. I can see that it created 3 instances on each NUMA node.

I have a compute node with 40 vCPUs. I assigned 32 to Nova and isolated those CPUs using isolcpus. Now I am creating VMs using the flavor metadata hw:cpu_policy=dedicated and hw:numa_nodes=1, but I can only spawn 6 instances of 4 vCPUs each, i.e. 24 vCPUs in use. In Horizon, under hypervisors, I can see that I still have 8 vCPUs free, yet I cannot spawn a 7th instance using the same flavor.