This would list 'Nailgun Super-tasks' (btw, we're going to rename them to transactions in the future). Then just use `fuel2 task history show <deployment_supertask_id>` to see task statuses, and filter by `--status <error|running|pending>`.
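A sketch of that workflow (the task ID here is a placeholder; `fuel2` is only available on the Fuel master node, so this is illustrative rather than runnable elsewhere):

```shell
# List the top-level deployment tasks ("Nailgun Super-tasks") to find the ID.
fuel2 task list

# Show the per-task breakdown for one deployment run
# (replace 42 with the real deployment_supertask_id).
fuel2 task history show 42

# Narrow the output to failed tasks only.
fuel2 task history show 42 --status error
```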

I'm having problems configuring the network settings in Fuel 8.0. I get the following error: "verification failed: Expected VLAN (not received)" over the interfaces related to the Storage network. My storage network runs over an unmanaged InfiniBand switch (Mellanox IS5025), which means it is plug and play, so I cannot configure PKeys (partition keys, the InfiniBand counterpart of VLANs). There is an incompatibility problem between VLANs and PKeys.

Alright gentlemen, I have an issue and no clue what caused it (still debugging how to fix it). Is there ever a case where the br-ex interface on a Fuel controller should change IP, and same for br-mgmt? I have a one-controller deploy and the IPs all shuffled around somehow (it didn't even reboot; uptime was nearly 3 months), and now all sorts of issues are coming up when using services.

For example, /etc/network/interfaces.d/ifcfg-br-mgmt is set to 192.168.0.3, and it's running with that IP. However, my openrc file has 192.168.0.2 configured (and that worked fine until everything else stopped working).
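One way to see a mismatch like that at a glance is to parse the configured address out of the ifcfg file and compare it to what the interface is actually running. Sketched here against a sample stanza written to a temp dir (on the node you'd read the real /etc/network/interfaces.d/ifcfg-br-mgmt):

```shell
# Hypothetical sample of a Debian-style ifcfg stanza, stood in for the real file.
tmp=$(mktemp -d)
cat > "$tmp/ifcfg-br-mgmt" <<'EOF'
auto br-mgmt
iface br-mgmt inet static
address 192.168.0.3
netmask 255.255.255.0
EOF

# Pull the "address" line; strip any /prefix suffix if present.
configured=$(awk '/^address/ {print $2}' "$tmp/ifcfg-br-mgmt" | cut -d/ -f1)
echo "configured: $configured"

# On the live controller, compare against the running address:
#   ip -4 addr show br-mgmt | awk '/inet /{print $2}' | cut -d/ -f1
```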

So I had manually changed /etc/network/interfaces.d/br-ex back to the old IP earlier, hoping it would resolve the issue, but it didn't. I've reset it to how it was at the start of this brokenness and rebooted, and now vip__public is started. The only failure `pcs status` shows is that p_ntp_monitor_20000 is not running, and PCSD Status shows 192.168.0.3: Offline.

Just poking around more, but `fuel --env 1 network --download` shows public_vip as 192.168.5.3 (the old IP), whereas br-ex on the controller has 192.168.5.4 (a random new IP). public_vrouter_vip on Fuel is set to 192.168.5.2.
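The download lands in a YAML file you can grep for all the VIP assignments, to compare them against `ip addr` on the controller. A sketch against a sample excerpt (the real file comes from `fuel --env 1 network --download`; the filename and values here are assumptions):

```shell
# Hypothetical excerpt of the downloaded network settings YAML.
tmp=$(mktemp -d)
cat > "$tmp/network_1.yaml" <<'EOF'
public_vip: 192.168.5.3
public_vrouter_vip: 192.168.5.2
management_vip: 192.168.0.2
EOF

# Pull out every VIP assignment in one shot.
vips=$(grep -E '^(public|management).*vip' "$tmp/network_1.yaml")
echo "$vips"
```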

Aaand rebooted again, but now `pcs status` shows everything as Stopped. /var/log/pacemaker shows a bunch of errors about "Resource * cannot run anywhere". Above those it says node controller1 has a combined system health of -1000000 - does that mean anything to anyone? This is my only controller.

Ok ok ok, forget all that IP-changing nonsense; I may not know what I'm talking about and will have to track that aspect down later. I think my real issue was that /var/log was almost full, and none of the resources from `pcs status` would start because of that (maybe that is why the controller wasn't considered healthy).
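That would fit Pacemaker's node-health behavior: with a "red" health strategy, a health attribute going red (e.g. a full disk detected by the sysinfo-style monitoring) drives the node's combined health score to -1000000, and every resource then "cannot run anywhere". That's a hedged reading; the exact wiring depends on how MOS configures the health monitoring. A quick way to spot the offender:

```shell
# How full is the log partition?
df -h /var/log

# Which logs are eating it? (largest last)
du -sh /var/log/* 2>/dev/null | sort -h | tail -n 5
```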

Which brings up the question... I have some rather large logs that don't seem to be rotated. I would've thought Fuel would have rotation on all the logs (especially since the default log partition is pretty small). ceph-client.radosgw.gateway.log is pretty large compared to the others (1.5GB), ceph-mon.controller.log ~500MB, conntrackd-stats.log ~500MB, etc.
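A sketch of a logrotate drop-in that would cap files like those (the paths, thresholds, and filename are assumptions; worth checking what Fuel already ships under /etc/logrotate.d/ before adding anything, since ceph may have its own rotation config that just isn't covering these files):

```
# /etc/logrotate.d/ceph-extra (hypothetical drop-in)
/var/log/ceph/ceph-client.radosgw.gateway.log
/var/log/ceph/ceph-mon.*.log {
    weekly
    rotate 4
    size 100M
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal the daemon to reopen its log file, at the cost of possibly losing a few lines during the truncate.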

Ah, well I had already rebooted :) so I need to learn more about Pacemaker, I think. If all services are stopped, does it still keep checking and trying to start them? And that command just flips the bit on the error condition?
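For reference, the usual sequence on the controller (a CLI sketch against the live cluster, not runnable elsewhere): `pcs resource cleanup` clears the recorded failures so Pacemaker forgets the failcounts, re-evaluates placement, and retries the stopped resources. That is the "flipping the bit" part.

```shell
pcs resource cleanup      # forget failcounts/error state for all resources
pcs status                # resources should move from Stopped toward Started
crm_mon -1 --show-detail  # one-shot detailed cluster view, if needed
```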

On 8.0 there are some things I wish were a bit different (not sure 9.0 addresses them, though). For example, by default it seems to use Identity v2, which makes it non-trivial (for me anyway) to add in IdP / SSO integration.

So Fuel community 9 == MOS 9.0? Is community more or less just the more bleeding-edge version of MOS, but still stable? It's a small cluster, but it's still for the enterprise, so we do want something pretty stable.

Sure. It's just a pain; getting Fuel 8 installed was painful for our environment (remote over iDRAC, fuel-menu didn't play well in the virtual console, etc.), but maybe worth it if there are some good improvements over 8.

So 9 doesn't use Identity v3 yet either? Do you happen to know if it's not too bad to manually configure on 8.0 and/or 9.0, without breaking too much, to make use of web SSO or the other auth mechanisms? I briefly tried the Fuel LDAP plugin, which configures some of v3 at least, but it had strange issues for me.
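For context, the client side of moving to Identity v3 is mostly an openrc change; whether Keystone itself is wired up for v3/federation is the harder, server-side part being discussed here. A sketch with assumed endpoint and domain values (not necessarily the MOS defaults):

```shell
# Hypothetical v3 openrc additions; the auth URL/IP is an assumption.
export OS_AUTH_URL=http://192.168.0.2:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin

echo "auth: $OS_AUTH_URL (v$OS_IDENTITY_API_VERSION)"
```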

Sorry for all the questions; you've been really helpful though. I'm still fairly new (at least to troubleshooting; I've been running an MOS 8.0 dev environment for ~4 months or so, but no issues until today, it just worked).

We are spinning up a cluster at work now and I started with MOS 8.0 this week, but if community 9.0 is still pretty stable, maybe we should run with that instead (not sure if we can wait for MOS 9.0)... Would like to get a support contract one day, but gotta get the team on board first that OpenStack is a key part of our future (I think so...).

Sure. The downside is that the upgrade path, from what I've read, has required some extra nodes/hardware, since you essentially swap around controllers. Maybe by then we'll have more, but today we do not, as this is a first deploy just to get our feet wet and start using OpenStack for some infrastructure.

8 may be ok; it's the Identity v3 pieces that are the main thing missing, and/or some other components I'd like to set up that Fuel didn't handle (Designate, LBaaS, Trove). Not sure how easy it is to install those on top of the MOS 8.0 deploy (I think I saw someone mention dependency issues for Designate/Trove because of minor package differences in MOS components).

And/or do you know anything about their training options? I'm mostly curious whether they dive into Fuel specifics (the Fuel CLI, HA setup, etc.) - all the issues I started messing with today. I'm familiar with the OpenStack CLI and how to use the components after they are installed, but ensuring things are running right seems to be the tricky part for me... I'm worried the trainings will focus more on the OpenStack CLI and basic usage than on sysadmin work.

thanks for all the help mwhahaha, let your boss know you were awesome and helpful to someone today and have made good strides towards gaining another enterprise support customer if I can pull it off ;)