1. F19 and F20 have been installed via glusterfs-based volumes and show good performance on the Compute node. Yum works stably on F19 and a bit more slowly on F20.

5. On any cloud instance the MTU should be set to 1454 for proper communication over the GRE tunnel.
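As a minimal sketch, the MTU can be dropped inside the guest with `ip` (assuming `eth0` is the instance's interface name; adjust if yours differs):

```shell
# Set MTU to 1454 on the running interface; GRE encapsulation overhead
# would otherwise cause large packets to be fragmented or dropped
sudo ip link set dev eth0 mtu 1454

# Make the change persistent across reboots (Fedora ifcfg-style networking)
echo "MTU=1454" | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth0

# Verify the new MTU
ip link show eth0
```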
Post below follows up the two Fedora 20 VMs setup described in:
http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
Both cases above have been tested: default and non-default libvirt networks.
In the meantime I believe that using libvirt networks for creating the Controller and Compute nodes as F20 VMs is not essential. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling for an RDO Havana manual setup on Fedora 20.
Current status: F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root & nova passwords at the FQDN of the Controller host. I was also never able to start Neutron Server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron Openvswitch agent and Neutron L3 agent don't start at the point described in the first manual, only once the Neutron Metadata agent is up and running. Notice also that in the meantime the services
openstack-nova-conductor & openstack-nova-scheduler won't start if the mysql.user table isn't ready with the nova account password at the Controller's FQDN. All these updates are reflected in the Reference Links attached as text docs.
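The manual database intervention can be sketched roughly as follows. This is an assumption on my side, not the exact statements from the attached docs: `controller.localdomain` stands in for the Controller's FQDN and `nova_pass` for the real password — substitute both:

```shell
# Grant the nova account access when connecting from the Controller's FQDN;
# without these grants openstack-nova-conductor & openstack-nova-scheduler
# refuse to start. Hostname and password below are placeholders.
mysql -u root -p <<'EOF'
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller.localdomain'
    IDENTIFIED BY 'nova_pass';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'
    IDENTIFIED BY 'nova_pass';
FLUSH PRIVILEGES;
EOF
```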

Cloud instances running on Compute perform commands like nslookup and traceroute. `yum install` & `yum -y update` work on a Fedora 19 instance; however, in the meantime the network on the F19 VM is stable but relatively slow. It might be that the Realtek 8169 integrated on board is not good enough for GRE and it's a problem of my hardware (dfw01 built with a Q9550, ASUS P5Q3, 8 GB DDR3, SATA 2 Seagate 500 GB). CentOS 6.5 with "RDO Havana + Glusterfs + Neutron VLAN" works much faster on the same box (dual booting with F20).

Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and just picked them up during a KDE environment installation via yum, which I actually don't need at all on a cloud instance of Fedora).

So, loading a cloud instance via `nova boot --user-data=./myfile.txt ....` provides access to the command line and sets the MTU for eth0 to 1454; this makes the instance available for ssh connections from the Controller and Compute nodes and also makes Internet surfing possible on the Fedora 19, Fedora 20 and Ubuntu 13.10 Server instances. A lightweight X Windows setup has been used for all cloud instances mentioned above.
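A minimal `myfile.txt` along these lines would do the job. This is a sketch, not the exact file used here: it assumes cloud-init executes shell-script user-data at first boot and that the guest interface is `eth0`:

```shell
#!/bin/sh
# myfile.txt - user-data script run by cloud-init at first boot.
# Drops the MTU so the instance's traffic fits inside the GRE tunnel.
ip link set dev eth0 mtu 1454

# Persist across reboots on Fedora-style guests
echo "MTU=1454" >> /etc/sysconfig/network-scripts/ifcfg-eth0
```

It is then passed to the instance as shown in the text: `nova boot --flavor 2 --user-data=./myfile.txt ... INSTANCE_NAME`.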

*************************************************************************
On Controller (192.168.1.127) and on Compute (192.168.1.137)
*************************************************************************

***********************************************************************************
Update on 03/11/2014.
***********************************************************************************
The standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. The schema described in the previous update of 03/09/14, on the contrary, stopped working smoothly on glusterfs-based cinder volumes.
However, even when it ends up with "Error" status it still creates a glusterfs cinder volume (with a system_id) which is quite healthy and may be utilized for building a new instance of F20 or Ubuntu 14.04, whatever the original image was, via CLI or Dashboard. It looks like a kind of bug in Nova & Neutron interprocess communication; I would say synchronization at boot up.
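End to end, the working schema looks roughly like this (a sketch; the image ID, volume name, size and instance name below are placeholders to be replaced with real values):

```shell
# 1. Build a bootable cinder volume (7 GB here) from a Glance image
cinder create --image-id <IMAGE_ID> --display_name F20Volume 7

# 2. Wait for the volume to become "available" and note its ID
cinder list

# 3. Boot an instance from that volume: vda, boot index 0
nova boot --flavor 2 --user-data=./myfile.txt \
    --block_device_mapping vda=<VOLUME_ID>:::0 F20Instance
```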
Please view :-

TigerVNC Viewer 64-bit v1.3.0 (20140121)
Built on Jan 21 2014 at 09:40:20
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.

The original text of the documents was posted on fedoraproject.org by Kashyap.
The attached ones are tuned for the new IPs and should not have any more typos of the original version. They also contain the MySQL preventive updates currently required for the openstack-nova-compute & neutron-openvswitch-agent remote connections to the Controller node to succeed. /etc/sysconfig/iptables was updated on the Controller and Compute nodes; the lines below were commented out:

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited
This is needed to be able to set up a Gluster 3.4.2 cluster and use a gluster replica 2 volume as storage for Cinder.
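Creating such a replica 2 volume can be sketched as below. The hostnames and brick path are my placeholders for the two boxes forming the cluster, not the names from the attached configs:

```shell
# On the first node: add the second node to the trusted pool
gluster peer probe node2.localdomain

# Create a 2-way replicated volume across one brick per node
gluster volume create cinder-volumes replica 2 \
    node1.localdomain:/data/brick1 node2.localdomain:/data/brick1

# Start the volume so Cinder's glusterfs driver can mount it
gluster volume start cinder-volumes
```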
The MySQL stuff is mine. All attached *.conf & *.ini files have been updated for my network as well.
In the meantime I am quite sure that using libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not essential. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling for an RDO Havana manual setup on Fedora 20.
References