<p><i>Fore! Stray shots from Dave Miner</i></p>
<h1><a href="https://blogs.oracle.com/dminer/entry/safely_updating_a_solaris_openstack">Upgrading Solaris Engineering's OpenStack Cloud</a></h1>
<p><i>Dave Miner, 2015-07-10</i></p>
<p>The <a href="http://www.oracle.com/technetwork/server-storage/solaris11/overview/beta-2182985.html">Solaris 11.3 Beta release</a> includes an update to the bundled OpenStack packages from the Havana version to Juno<sup>1</sup>.&nbsp; Over on the <a href="https://blogs.oracle.com/openstack/">OpenStack blog</a> my colleague Drew Fisher has a <a href="https://blogs.oracle.com/openstack/entry/upgrading_openstack_from_havana_to">detailed post</a> that looks under the covers at the work the community and our Solaris development team did to make this <b>major</b> upgrade as painless as possible. Here, I'll talk about applying that upgrade technology from the operations side, as we recently performed this upgrade on the internal cloud that we're operating for Solaris engineering.&nbsp; See my <a href="https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_for">series of posts</a> from last year on how our cloud was initially constructed.</p>
<h3>Our Upgrade Process</h3>
<p>The first thing to understand about our upgrade process is that, since the Solaris Nova driver as yet lacks live migration support, we can't upgrade compute nodes without an outage for the guest instances.&nbsp; We also don't yet have an HA configuration deployed for the database and all the services, so those also require outages to upgrade<sup>2</sup>.&nbsp; Therefore, all of our upgrades have downtime scheduled for the entire cloud and we attempt to upgrade all the nodes to the same build.&nbsp; We typically schedule two hours for upgrades.&nbsp; If everything were to go smoothly we could be done in less than 30 minutes, but it never works out that way, at least so far.</p>
<p>Right now, we're still doing the upgrades fairly manually, with a small script that we run on each node in turn.&nbsp; That script looks something like:</p>
<pre># shut down puppet so that patches don't get pulled before they are required
svcadm disable -t puppet:agent
# shut down zones so update goes more quickly, use synchronous to wait for this
svcadm disable -ts zones
# Disable nova API and BUI; we use temporary for API so it will
# come back on reboot but persistent for BUI so that it's not available
# until we're ready to end the outage.
# Dump database for disaster recovery
if [[ $node == "cloud-controller" ]]; then
    svcadm disable -t nova-api-osapi-compute
    svcadm disable apache22
    mysqldump --user=root --password='password' --add-drop-database --all-databases &gt;/tank/all_databases.sql
fi
pkg update --be-name solaris11_3 -C 5</pre>
<p>Once the script completes, we can reboot the system.&nbsp; The comment above about Puppet relates to specifics in how we are using it; since we sometimes have bugs in the builds that we can work around, we typically use Puppet to distribute those workarounds, but we don't want them to take effect until we've rebooted into the new boot environment.&nbsp; There's almost certainly a better way to do this; we're just not that smart yet ;-)</p>
<p>We run the above script on all the nodes in parallel, which is safe because the upgrade always creates a new boot environment, and we wait until all of the core service nodes (keystone, cinder, nova controller, neutron, glance) are done before we reboot any of them.&nbsp; We don't necessarily wait for all of the compute nodes, since they can take longer if any of the guests are non-global zones, and they are the last thing we reboot anyway.</p>
<p>Once the updates are complete, we reboot nodes in the following order:</p>
<ol>
<li>Nova controller - MySQL, RabbitMQ, Keystone, Nova api's, Heat <br /></li>
<li>Neutron controller</li>
<li>Cinder controller</li>
<li>Glance</li>
<li>Compute nodes</li>
</ol>
<p>This order minimizes disruptions to the services connecting to RabbitMQ and MySQL, which have been a point of fragility for many operators of OpenStack clouds.&nbsp; It also ensures that the compute nodes don't see disruptions to iSCSI connections for running zones, which we've seen occasionally lead to ZFS pools ending up in a suspended state.&nbsp; As we build out the cloud we'll be separating the functions that are in the Nova controller into separate instances, which will necessitate some adjustments to this sequencing, but the basic idea is to work from the database to rabbitmq to keystone to the nova services.</p>
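<p>Before moving down the reboot list, it's worth a quick health check on each node; this isn't part of any script we ship, just ordinary Solaris service triage that we find useful at this point:</p>
<pre># any services that failed to come up, with explanations
svcs -xv
# spot-check the messaging, database, and OpenStack services on the controller
svcs "*rabbitmq*" "*mysql*" "*nova*" "*keystone*"</pre>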
<h3>Verifying the Upgrade Worked</h3>
<p>Once we've rebooted the nodes we run a couple of quick tests to launch both SPARC and x86 guests, ensuring that basically all of the machinery is working.&nbsp; I've started doing this with a fairly simple Heat template:</p>
<pre>heat_template_version: 2013-05-23

description: &gt;
  HOT template to deploy SPARC &amp; x86 servers as a quick sanity test

parameters:
  x86_image:
    type: string
    description: Name of image to use for x86 server
  sparc_image:
    type: string
    description: Name of image to use for SPARC server

resources:
  x86_server1:
    type: OS::Nova::Server
    properties:
      name: test_x86
      image: { get_param: x86_image }
      flavor: 1
      key_name: testkey
      networks:
        - port: { get_resource: x86_server1_port }
  x86_server1_port:
    type: OS::Neutron::Port
    properties:
      network: internal
  x86_server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: external
      port_id: { get_resource: x86_server1_port }
  sparc_server1:
    type: OS::Nova::Server
    properties:
      name: test_sparc
      image: { get_param: sparc_image }
      flavor: 1
      key_name: testkey
      networks:
        - port: { get_resource: sparc_server1_port }
  sparc_server1_port:
    type: OS::Neutron::Port
    properties:
      network: internal
  sparc_server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: external
      port_id: { get_resource: sparc_server1_port }

outputs:
  x86_server1_public_ip:
    description: Floating IP address of x86 server in public network
    value: { get_attr: [ x86_server1_floating_ip, floating_ip_address ] }
  sparc_server1_public_ip:
    description: Floating IP address of SPARC server in public network
    value: { get_attr: [ sparc_server1_floating_ip, floating_ip_address ] }
</pre>
<p>Once that test runs successfully, we declare the outage over and re-enable the Apache service to restore access to the Horizon dashboard.</p>
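<p>For reference, a run of that sanity test from the command line looks roughly like the following; the template file name, image names, and stack name are placeholders, and <font face="courier new,courier,monospace">apache22</font> is the same service we disabled at the start of the outage:</p>
<pre># launch the test stack with SPARC and x86 images
heat stack-create -f sanity.yaml \
    -P "x86_image=sol-11_3-x86;sparc_image=sol-11_3-sparc" sanity
heat stack-list
# once both servers reach CREATE_COMPLETE and answer ssh, clean up
heat stack-delete sanity
# end the outage by restoring the Horizon dashboard
svcadm enable apache22</pre>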
<h3>Our Upgrade Experiences</h3>
<p>Since we went into production almost a year ago, we've upgraded the entire cloud infrastructure, including the OpenStack packages, seven times.&nbsp; Had we met our goals we would have upgraded every two weeks as each full Solaris development build is released internally (and thus would have done over 20 upgrades), but the reality of running at the bleeding edge of the operating system's development is that we find bugs.&nbsp; Several have been too serious, and too difficult to work around, for an upgrade to be worthwhile, so we've had to delay a number of times while we waited for fixes to integrate. Through all of this, we've learned a lot and are continually refining our upgrade process.</p>
<p>So far, we've only had one unsuccessful upgrade over the last year, and even that was reasonably painless to recover from, since we just re-activated the old boot environment on each node and rebooted back to it.&nbsp; We now pre-stage each upgrade on a single-node stack that's configured similarly to the production cloud to verify there aren't any truly catastrophic problems with kernel zones, ZFS, or networking.&nbsp; That's mostly been successful, but we're going to build a small multi-node cloud for staging to ensure that we can catch issues in additional areas such as iSCSI that aren't exercised properly by a single-node stack.&nbsp; The lesson, as always, is to have your test environment replicate production as closely as possible.</p>
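<p>The rollback itself is just standard boot environment management on each node; the BE name below is only an example:</p>
<pre># list boot environments; the pre-upgrade BE is still present
beadm list
# make the old BE active on the next boot, then reboot into it
beadm activate solaris11_2
init 6</pre>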
<p>For this particular upgrade, we did a lot more testing; I spent the better part of two weeks running trial upgrades from Havana to Juno to shake out issues there, which allowed the development team to fix a bunch of bugs in the configuration file and database upgrades before we went ahead with the actual upgrade.&nbsp; Even so, the production upgrade was more of an adventure than we expected.&nbsp; We ran into three issues:</p>
<ol>
<li>After we rebooted the controller node, the heat-db service went into maintenance.&nbsp; The database had been corrupted because the service exceeded its start method timeout, which caused SMF to kill and restart it at what was apparently a very inopportune moment.&nbsp; Fortunately we had made little use of heat with Havana, so we could simply drop the database and recreate it.&nbsp; The SMF method timeout is being fixed (for heat-db and other services), though that fix isn't in the 11.3 beta release.&nbsp; We're also having some discussion about whether SMF should generally default to much longer start method timeouts.&nbsp; We find that developers are consistently over-optimistic about the true performance of systems in production, and the short timeouts of 30 seconds or 1 minute that are commonly used are more likely to cause harm than good.</li>
<li>The puppet:master service went into maintenance when that node was rebooted; with truss we determined that for some reason it was attempting to kill labeld, failing, and exiting.&nbsp; This is still being investigated, as we've had difficulty reproducing it.&nbsp; Fortunately, disabling labeld worked around the problem and we were able to proceed.</li>
<li>After we had resolved the above issues, the test launches we use to verify the cloud is working would not complete: they'd be queued but never actually happen.&nbsp; This took us over an hour to diagnose, in part because we're not that experienced with RabbitMQ issues; up to that point it had &quot;just worked&quot;.&nbsp; It turned out that we were victims of the default file descriptor limit for RabbitMQ, at 256, being too low to handle all of the connections from the various services using it.&nbsp; Apparently Juno is just more resource-hungry in this respect than Havana, and it's not something we could have observed in the smaller test environment.&nbsp; Adding a &quot;ulimit -n 1024&quot; to the rabbitmq start method worked around this for now (see the sketch after this list); this has sparked some internal discussion, as yet unresolved, on whether the default limits should be increased.&nbsp; The values are relics from many years ago and likely could use some updating.</li>
</ol>
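<p>For the record, here's roughly what those two interim workarounds look like; the FMRI and the start method details may differ in your deployment, so treat this as a sketch rather than the exact change we applied:</p>
<pre># give heat-db a more generous start method timeout (300 seconds here is arbitrary)
svccfg -s heat-db setprop start/timeout_seconds = count: 300
svcadm refresh heat-db
svcadm clear heat-db

# raise the file descriptor limit for RabbitMQ by adding a line like
#   ulimit -n 1024
# to the rabbitmq start method script before the daemon is launched</pre>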
<p>Overall, this upgrade clocked in at a bit over 4 hours of downtime, not the 3 hours that we'd scheduled.&nbsp; Happily, our cloud has run very smoothly in the weeks since the upgrade to Juno, and our users are very pleased with the much-improved Horizon dashboard.&nbsp;&nbsp;&nbsp; We're now working our way through a long list of improvements to our cloud and getting the equipment in place to move to an HA environment, which will let us move towards our goal of rolling, zero-downtime upgrades.&nbsp; More updates to come!</p>
<h4>Footnotes</h4>
<ol>
<li>If you're following the OpenStack community, you'll ask, &quot;What about Icehouse?&quot;&nbsp; We skipped it in order to catch up to the community releases more quickly.</li>
<li>I am happy to note that, in spite of this lack of HA, we've had only a few minutes of unscheduled service interruptions over the course of the year, due mostly to panics in the Cinder or Neutron servers.&nbsp; That seems pretty good considering the bleeding-edge nature of the software we're running.</li>
</ol>
<h1><a href="https://blogs.oracle.com/dminer/entry/heating_up_your_openstack_cloud">Heating Up Your OpenStack Cloud</a></h1>
<p><i>Dave Miner, 2014-10-28</i></p>
<p>As part of the support updates to <a href="http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html">Solaris 11.2</a>, we recently added the <a href="https://wiki.openstack.org/wiki/Heat">Heat orchestration engine</a> to our <a href="http://www.openstack.org/">OpenStack</a> distribution.&nbsp; If you aren't familiar with Heat, I highly recommend getting to know it, as you'll find it invaluable in deploying complex application topologies within an OpenStack cloud.&nbsp; I've updated the <a href="https://blogs.oracle.com/dminer/resource/openstack_scripts.tar.gz">script tarball</a> from my recent series on building the Solaris engineering cloud to include configuration of Heat, so if you download that and update your cloud controller to the latest SRU, you can run <font face="courier new,courier,monospace">havana_setup.py heat</font> to turn it on.</p>
<p>OK, once you've done that, what can you do with Heat?&nbsp; Well, I've added a script and a Heat template that it uses to the tarball to give you at least one idea.&nbsp; The script, <font face="courier new,courier,monospace">create_image</font>, is similar to a script that we run to create custom Unified Archive images internally for the Solaris cloud.&nbsp; The basic idea is to deploy an OpenStack instance using the standard archive that release engineering constructs for the product build, add some things we need to it, then save an image of that for the users of the cloud to use as a base deployment image.&nbsp; I'd originally written a script to do this using the nova CLI, but using a Heat template simplified it.&nbsp; The <font face="courier new,courier,monospace">simple.hot</font> file in the tarball is the template that it uses; that template is a simpler version of a <a href="https://github.com/openstack/heat-templates/blob/master/hot/servers_in_existing_neutron_net.yaml">two-node template</a> from the <a href="https://github.com/openstack/heat-templates">heat-templates repository</a>.&nbsp; It's fairly self-explanatory so I'm not going to walk through it here.<br /></p>
<p> As for create_image itself, the standard Solaris archive contains the packages in the solaris-minimal-server group, a pretty small package set that isn't very useful by itself but makes a nice base for building images that include the specific things you need.&nbsp; In our case, I've defined a group package that pulls in a bunch of things we typically use in Solaris development work: ssh client, LDAP, NTP, Kerberos, NFS client and automounter, the man command, and less.&nbsp; Here's what the main part of the package manifest looks like:</p>
<pre>depend fmri=/network/ssh type=group
depend fmri=group/system/solaris-minimal-server type=group
depend fmri=ldapcert type=group
depend fmri=naming/ldap type=group
depend fmri=security/nss-utilities type=group
depend fmri=service/network/ntp type=group
depend fmri=service/security/kerberos-5 type=group
depend fmri=system/file-system/autofs type=group
depend fmri=system/file-system/nfs type=group
depend fmri=system/network/nis type=group
depend fmri=text/doctools type=group
depend fmri=text/less type=group
</pre>
<p>In our case we bundle the package in a package archive file that we copy into the image using <font face="courier new,courier,monospace">scp</font> and then install the group package.&nbsp; Doing this saves our developers a few minutes in getting what they need deployed, and that's one easy way we can show them value in using the cloud rather than our older lab infrastructure.&nbsp; It's certainly possible to do much more interesting customizations than this, so experiment and share your ideas; we're looking to make Heat much more useful on Solaris OpenStack as we move ahead.&nbsp; You can also talk to us at the OpenStack summit in Paris next week; a number of us will be manning the booth at various times when we're not in sessions at the design summit or the conference itself.</p>
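<p>Going back to the package archive step for a moment: that archive is just an IPS <font face="courier new,courier,monospace">.p5p</font> file, and building one and installing from it looks roughly like this (the repository URL and group package name are examples):</p>
<pre># pull the group package and its dependencies into a portable archive
pkgrecv -s http://pkg.example.com/solaris -a -r -d /tmp/devserver.p5p site/group/devserver
# copy the archive to the new instance and install the group from it
scp /tmp/devserver.p5p root@myinstance:/var/tmp/
ssh root@myinstance pkg install -g /var/tmp/devserver.p5p site/group/devserver</pre>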
<p>Oh, and for those who are interested, the Solaris development cloud is now up past 100 users and has 5 compute nodes deployed.&nbsp; Still not large by any measure, but it's growing quickly and we're learning more about running OpenStack every day.</p>
<h1><a href="https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_for3">Building an OpenStack Cloud for Solaris Engineering, Part 4</a></h1>
<p><i>Dave Miner, 2014-09-19</i></p>
<p>The prior parts of this series discussed the design and deployment of the undercloud nodes on which our cloud is implemented.&nbsp; Now it's time to configure OpenStack and turn the cloud on.&nbsp; Over on <a href="http://www.oracle.com/technetwork/index.html">OTN</a>, my colleague David Comay has published a general <a href="http://www.oracle.com/technetwork/articles/servers-storage-admin/getting-started-openstack-os11-2-2195380.html">getting started guide</a> that does a manual setup based on the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/downloads/unified-archives-2245488.html">OpenStack all-in-one Unified Archive</a>; I recommend at least browsing through that for background that will come in handy as you deal with the inevitable issues that occur in running software with the complexity of OpenStack.&nbsp; It's even better to run through that single-node setup to get some experience before moving on to trying to build a multi-node cloud.</p>
<p>For our purposes, I needed to script the configuration of a multi-node cloud, and that makes everything more complex, not the least of the problems being that you can't just use the loopback IP address (127.0.0.1) as the endpoint for every service.&nbsp; We had (compliments of my colleague Drew Fisher) a script for single-system configuration already, so I started with that and hacked away to build something that could configure each component correctly in a multi-node cloud.&nbsp; That Python script, called <font face="courier new,courier,monospace">havana_setup.py</font>, and some associated scripts are <a href="https://blogs.oracle.com/dminer/resource/openstack_scripts.tar.gz">available for download</a>.&nbsp; Here, I'll walk through the design and key pieces.</p>
<h2>Pre-work</h2>
<p> Before the proper OpenStack configuration process, you'll need to run the <font face="courier new,courier,monospace">gen_keys.py</font> script to create some SSH keys.&nbsp; These are used to secure the Solaris RAD (Remote Administration Daemon) transport that the Solaris Elastic Virtual Switch (EVS) controller uses to manage the networking between the Nova compute nodes and the Neutron controller node.&nbsp; The script creates <font face="courier new,courier,monospace">evsuser</font>, <font face="courier new,courier,monospace">neutron</font>, and <font face="courier new,courier,monospace">root</font> sub-directories in whatever location you run it, and this location will be referenced later in configuring the Neutron and Nova compute nodes, so you want to put it in a directory that's easily shared via NFS.&nbsp; You can (and probably should) unshare it after the nodes are configured, though.<br /></p>
<h2>Global Configuration</h2>
<p>The first part of <font face="courier new,courier,monospace">havana_setup.py</font> is a whole series of global declarations that parameterize the services deployed on various nodes.&nbsp; You'll note that the PRODUCTION variable can be set to control the layout used; if its value is False, you'll end up with a single-node deployment.&nbsp; I have a couple of extra systems that I use for staging, and this makes it easy to replicate the configuration well enough to do some basic sanity testing before deploying changes.</p>
<pre>MY_NAME = platform.node()
MY_IP = socket.gethostbyname(MY_NAME)

# When set to False, you end up with a single-node deployment
PRODUCTION = True

CONTROLLER_NODE = MY_NAME
if PRODUCTION:
    CONTROLLER_NODE = "controller.example.com"

DB_NODE = CONTROLLER_NODE
KEYSTONE_NODE = CONTROLLER_NODE
GLANCE_NODE = CONTROLLER_NODE
CINDER_NODE = CONTROLLER_NODE
NEUTRON_NODE = CONTROLLER_NODE
RABBIT_NODE = CONTROLLER_NODE
HEAT_NODE = CONTROLLER_NODE

if PRODUCTION:
    GLANCE_NODE = "glance.example.com"
    CINDER_NODE = "cinder.example.com"
    NEUTRON_NODE = "neutron.example.com"
</pre>
<p>Next, we configure the main security elements, the root password for MySQL plus passwords and access tokens for Keystone, along with the URL's that we'll need to configure into the other services to connect them to Keystone.<br /></p>
<pre>SERVICE_TOKEN = "TOKEN"
MYSQL_ROOTPW = "mysqlroot"
ADMIN_PASSWORD = "adminpw"
SERVICE_PASSWORD = "servicepw"
AUTH_URL = "http://%s:5000/v2.0/" % KEYSTONE_NODE
IDENTITY_URL = "http://%s:35357" % KEYSTONE_NODE
</pre>
<p>The remainder of this section configures specifics of Glance, Cinder,&nbsp; Neutron, and Horizon.&nbsp; For Glance and Cinder, we provide the name of the base ZFS dataset that each will be using.&nbsp; For Neutron, the NIC, VLAN tag, and external network addresses, as well as the subnets for each of the two tenants we are providing in our cloud.&nbsp; We chose to have one tenant for developers in the organization that is funding this cloud, and a second tenant for other Oracle employees who want to experiment with OpenStack on Solaris; this gives us a way to grossly allocate resources between the two, and of course most go to the tenant paying the bill.&nbsp; The last element of each tuple in the tenant network list is the number of floating IP addresses to set as the quota for the tenant.&nbsp; For Horizon, the paths to a server certificate and key must be configured, but only if you're using TLS, and that's only the case if the script is run with PRODUCTION = True.&nbsp; The SSH_KEYDIR should be set to the location where you ran the <font face="courier new,courier,monospace">gen_keys.py</font> script, above.<br /></p>
<pre>GLANCE_DATASET = "tank/glance"
CINDER_DATASET = "tank/cinder"

UPLINK_PORT = "aggr0"
if PRODUCTION:
    VXLAN_RANGE = "500-600"
    TENANT_NET_LIST = [("tenant1", "192.168.66.0/24", 10),
                       ("tenant2", "192.168.67.0/24", 60)]
else:
    VXLAN_RANGE = "400-499"
    TENANT_NET_LIST = [("tenant1", "192.168.70.0/24", 5),
                       ("tenant2", "192.168.71.0/24", 5)]

EXTERNAL_GATEWAY = "10.134.12.1"
EXTERNAL_NETWORK_ADDR = "10.134.12.0/24"
EXTERNAL_NETWORK_VLAN_TAG = "12"
EXTERNAL_NETWORK_NAME = "external"

SERVER_CERT = "/path/to/horizon.crt"
SERVER_KEY = "/path/to/horizon.key"

SSH_KEYDIR = "/path/to/generated/keys"
</pre>
<h2>Configuring the Nodes</h2>
<p>The remainder of <font face="courier new,courier,monospace">havana_setup.py</font> is a series of functions that configure each element of the cloud.&nbsp; You select which element(s) to configure by specifying command-line arguments.&nbsp; Valid values are <font face="courier new,courier,monospace">mysql</font>, <font face="courier new,courier,monospace">keystone</font>, <font face="courier new,courier,monospace">glance</font>, <font face="courier new,courier,monospace">cinder</font>, <font face="courier new,courier,monospace">nova-controller</font>, <font face="courier new,courier,monospace">neutron</font>, <font face="courier new,courier,monospace">nova-compute</font>, and <font face="courier new,courier,monospace">horizon</font>.&nbsp; I'll briefly explain what each does below.&nbsp; One thing to note is that each function first creates a backup boot environment so that if something goes wrong, you can easily revert to the state of the system prior to running the script.&nbsp; This is a practice you should always use in Solaris administration before making any system configuration changes.&nbsp; It also saved me a ton of time in development of the cloud, since I could reset within a minute or so every time I had a serious bug.&nbsp; Even our best re-deployment times with AI and archives are about 10 times that when you have to cycle through network booting.<br /></p>
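<p>In practice that means one invocation per role on the appropriate node; assuming you run one function at a time (so you can sanity-check each step), the overall sequence looks something like this:</p>
<pre># on the controller node
./havana_setup.py mysql
./havana_setup.py keystone
./havana_setup.py nova-controller
./havana_setup.py horizon
# on their respective nodes
./havana_setup.py glance
./havana_setup.py cinder
./havana_setup.py neutron
# on each compute node, only after neutron has been configured
./havana_setup.py nova-compute</pre>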
<h3>mysql</h3>
<p>MySQL must be the first piece configured, since all of the OpenStack services use databases to store at least some of their objects.&nbsp; This function sets the root password and removes some insecure aspects of the default MySQL configuration.&nbsp; One key piece is that it removes remote root access; that forces us to create all of the databases in this module, rather than creating each component's database in its associated module.&nbsp; There may be a better way to do this, but since I'm not a MySQL expert in any way, that was the easiest path here.&nbsp; On review, it seems like enabling the mysql SMF service should really be moved into the Puppet manifest from part 3.</p>
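<p>If you're curious what that amounts to, the databases are created with ordinary MySQL statements along these lines (one set per service; the password is a placeholder corresponding to SERVICE_PASSWORD in the globals):</p>
<pre>mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS nova;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'servicepw';"
mysql -u root -p -e "FLUSH PRIVILEGES;"</pre>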
<h3>keystone</h3>
<p>The keystone function does some basic configuration, then calls the <font face="courier new,courier,monospace">/usr/demo/openstack/keystone/sample_data.sh</font> script to configure users, tenants, and endpoints.&nbsp; In our deployment I've customized this script a bit to create the two tenants rather than just one, so you may need to make some adjustments for your site; I have not included that customization in the downloaded files.<br /></p>
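<p>The extra tenant is a one-line addition to that script; with the Havana-era keystone CLI and admin credentials in the environment, it's roughly:</p>
<pre>keystone tenant-create --name tenant2 --description "OpenStack evaluation tenant"</pre>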
<h3>glance</h3>
<p>The glance function configures and starts the various glance services, and also creates the base dataset for ZFS storage; we turn compression on to save on storage for all the images we'll have here.&nbsp; If you're rolling back and re-running for some reason, this module isn't quite idempotent as written because it doesn't deal with the case where the dataset already exists, so you'd need to use <font face="courier new,courier,monospace">zfs destroy</font> to delete the glance dataset.<br /></p>
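<p>The dataset handling amounts to something like the following, so the manual cleanup before a re-run is just a <font face="courier new,courier,monospace">zfs destroy</font> (the dataset name comes from GLANCE_DATASET in the globals):</p>
<pre># roughly what the glance function does for storage
zfs create -o compression=on tank/glance
# cleanup needed before re-running after a rollback
zfs destroy -r tank/glance</pre>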
<h3>cinder</h3>
<p>Beyond just basic configuration of the cinder services, the cinder function also creates the base ZFS dataset under which all of the volumes will be created.&nbsp; We create this as an encrypted dataset so that all of the volumes will be encrypted, which Darren Moffat covers at more length in <a href="https://blogs.oracle.com/openstack/entry/cinder_volume_encryption_with_zfs" title="permalink">OpenStack Cinder Volume encryption with ZFS</a>. Here we use pktool to generate the wrapping key and store it in root's home directory.&nbsp; One piece of work we haven't yet had time to take on is adding our ZFS Storage Appliance as an additional back-end for Cinder.&nbsp; I'll post an update to cover that once we get it done.&nbsp; Like the glance function, this function doesn't deal properly with the dataset already existing, so any rollback also needs to destroy the base dataset by hand.</p>
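<p>A sketch of the encrypted-dataset setup, along the lines of the approach in Darren's post; the key length and cipher below are illustrative and just need to match each other:</p>
<pre># generate a 256-bit wrapping key and store it in root's home directory
pktool genkey keystore=file outkey=/root/cinder.key keytype=aes keylen=256
# create the base dataset; volume datasets created below it inherit the encryption
zfs create -o encryption=aes-256-ccm -o keysource=raw,file:///root/cinder.key tank/cinder</pre>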
<h3>nova_controller &amp; nova_compute</h3>
<p>Since our deployment runs the nova controller services separate from the compute nodes, the nova_controller function is run on the controller node to set up the API, scheduler, and conductor services.&nbsp; If you combine the compute and controller nodes you would run this and then later run the nova_compute function.&nbsp; The nova_compute function also makes use of a couple of helper functions to set up the ssh configuration for EVS.&nbsp; For these functions to work properly you <b>must</b> run the neutron function on its designated node before running nova_compute on the compute nodes.<br /></p>
<h3>neutron</h3>
<p>The neutron setup function is by far the most complex, as we not only configure the neutron services, including the underlying EVS and RAD functions, but also configure the external network and the tenant networks.&nbsp; The external network is configured as a tagged VLAN, while the tenant networks are configured as VxLANs; you can certainly use VLANs or VxLANs for all of them, but this configuration was the most convenient for our environment.</p>
<h3>horizon</h3>
<p>For the production case, the horizon function just copies into place an Apache config file that configures TLS support for the Horizon dashboard and the server's certificate and key files.&nbsp; If you're using self-signed certificates, then the Apache <a href="http://httpd.apache.org/docs/2.2/ssl/ssl_faq.html">SSL/TLS Strong Encryption: FAQ</a> is a good reference on how to create them.&nbsp; For the non-production case, this function just comments out the pieces of the dashboard's local settings that enable SSL/TLS support.</p>
<h2>Getting Started</h2>
<p>Once you've run through all of the above functions from <font face="courier new,courier,monospace">havana_setup.py</font>, you have a cloud, and pointing your web browser at <font face="courier new,courier,monospace">http://&lt;your server&gt;/horizon</font> should display the login page, where you can log in as the <font face="courier new,courier,monospace">admin</font> user with the password you configured in the global settings of <font face="courier new,courier,monospace">havana_setup.py</font>.</p>
<p>Assuming that works, your next step should be to upload an image.&nbsp; The easiest way to start is by downloading the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/downloads/unified-archives-2245488.html">Solaris 11.2 Unified Archives</a>.&nbsp; Once you have an archive the upload can be done from the Horizon dashboard, but you'll find it easier to use the <font face="courier new,courier,monospace">upload_image</font> script that I've included in the download.&nbsp; You'll need to edit the environment variables it sets first, but it takes care of setting several properties on the image that are required by the Solaris Zones driver for Nova to properly handle deploying instances.&nbsp; Failure to set them is the single most common mistake that I and others have made in the early Solaris OpenStack deployments; when you forget and attempt to launch an instance, you'll get an immediate error, and the details from <font face="courier new,courier,monospace">nova show</font> will include the error:</p>
<pre>| fault                                | {"message": "No valid host was
found. ", "code": 500, "details": "  File
\"/usr/lib/python2.6/vendor-packages/nova/scheduler/filter_scheduler.py\",
line 107, in schedule_run_instance |
</pre>
<p> When you snapshot a deployed instance with Horizon or <font face="courier new,courier,monospace">nova image-create</font> the archive properties will be set properly, so it's only manual uploads in Horizon or with the <font face="courier new,courier,monospace">glance</font> command that need care.</p>
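<p>If you do upload by hand, the key is setting the properties that the Solaris Zones driver for Nova looks for; the exact set is what the <font face="courier new,courier,monospace">upload_image</font> script encodes, but it's along these lines (use <font face="courier new,courier,monospace">architecture=sparc64</font> for a SPARC archive):</p>
<pre>glance image-create --name "Solaris 11.2 x86" \
    --container-format bare --disk-format raw --is-public true \
    --property architecture=x86_64 \
    --property hypervisor_type=solariszones \
    --property vm_mode=solariszones \
    --file sol-11_2-x86.uar</pre>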
<p>There's one more preparation task to do: upload an ssh public key that'll be used to access your instances. Select <b>Access &amp; Security</b> from the list in the left panel of the Horizon Dashboard, then select the <b>Keypairs</b> tab, and click <b>Import Keypair</b>. &nbsp;You'll want to paste the contents of your <font face="courier new,courier,monospace"><span class="box code">~/.ssh/id_rsa.pub</span></font> into the Public Key field, and probably name your keypair the same as your username.</p>
<p>Finally, you are ready to launch instances.&nbsp;&nbsp; Select <b>Instances</b> in the Horizon Dashboard's left panel list, then click the <b>Launch Instance</b> button. &nbsp;Enter a name for the instance, select the Flavor, select <b>Boot from image</b>
as the Instance Boot Source, and select the image to use in deploying
the VM. &nbsp;The image will determine whether you get a SPARC or x86 VM and
what software it includes, while the flavor determines whether it is a
kernel zone or non-global zone, as well as the number of virtual CPUs
and amount of memory.&nbsp; The <b>Access &amp; Security</b> tab should default to selecting your uploaded keypair. &nbsp;You must go to the <b>Networking</b> tab and select a network for the instance. &nbsp;Then click <b>Launch</b> and the VM will be installed; you can follow progress by clicking on the instance name to see details and selecting the <b>Log</b> tab.&nbsp; It'll take a few minutes at present; in the meantime you can <b>Associate a Floating IP</b> in the <b>Actions</b> field. &nbsp;Pick any address from the list offered. &nbsp;Your instance will not be reachable until you've done this.</p>
<p>Once the instance has finished installing and reached the Active status, you can login to it. &nbsp;To do so, use <font face="courier new,courier,monospace"><span class="box code">ssh root@&lt;floating-ip-address&gt;</span></font>, which will login to the zone as root using the key you uploaded above.&nbsp; If that all works, congratulations, you have a functioning OpenStack cloud on Solaris!</p>
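<p>For those who prefer the command line, the same launch flow is roughly the following; the image name, network UUID, and addresses are placeholders:</p>
<pre>nova boot --image "Solaris 11.2 x86" --flavor 1 --key-name myuser \
    --nic net-id=5a9c1c4c-0000-0000-0000-000000000000 myinstance
# allocate a floating IP from the external network and attach it
nova floating-ip-create external
nova add-floating-ip myinstance 10.134.12.40
ssh root@10.134.12.40</pre>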
<p>In future posts I'll cover additional tips and tricks we've learned in operating our cloud.&nbsp; At this writing we have over 60 users and are growing steadily, and the cloud has been totally reliable over 3 months, with the only outages being for updates to the infrastructure.</p>
<h1><a href="https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_for2">Building an OpenStack Cloud for Solaris Engineering, Part 3</a></h1>
<p><i>Dave Miner, 2014-09-16</i></p>
<p>At the end of <a href="https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_for1">Part 2</a>, we built the infrastructure needed to deploy the undercloud systems into our network environment.&nbsp; However, there's more configuration needed on these systems than we can completely express via Automated Installation, and there's also the issue of how to effectively maintain the undercloud systems.&nbsp; We're only running a half dozen initially, but expect to add many more as we grow, and even at this scale it's still too much work, with too high a probability of mistakes, to do things by hand on each system.&nbsp; That's where a configuration management system such as <a href="http://puppetlabs.com/puppet/puppet-open-source">Puppet</a> shows its value, providing us the ability to define a desired state for many aspects of many systems and have Puppet ensure that state is maintained.&nbsp; My team did a lot of work to <a href="https://blogs.oracle.com/observatory/entry/puppet_configuration_in_solaris">include Puppet in Solaris 11.2</a> and extend it to manage most of the major subsystems in Solaris, so the OpenStack cloud deployment was a great opportunity to start working with another shiny new toy.</p>
<h2>Configuring the Puppet Master</h2>
<p>One feature of the Puppet integration with Solaris is that the Puppet configuration is expressed in SMF, and then translated by the new <a href="https://blogs.oracle.com/SolarisSMF/entry/introducing_smf_stencils">SMF Stencils</a> feature to settings in the usual /etc/puppet/puppet.conf file.&nbsp; This makes it possible to configure Puppet using SMF profiles at deployment time, and the examples in Part 2 showed this for the clients.&nbsp; For the master, we apply the profile below:</p>
<pre>&lt;!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"&gt;
&lt;!--
    This profile configures the Puppet master
--&gt;
&lt;service_bundle type="profile" name="puppet"&gt;
  &lt;service version="1" type="service" name="application/puppet"&gt;
    &lt;instance enabled="true" name="master"&gt;
      &lt;property_group type="application" name="config"&gt;
        &lt;propval name="server" value="puppetmaster.example.com"/&gt;
        &lt;propval name="autosign" value="/etc/puppet/autosign.conf"/&gt;
      &lt;/property_group&gt;
    &lt;/instance&gt;
  &lt;/service&gt;
&lt;/service_bundle&gt;
</pre>
<p>The interesting setting is the autosign configuration, which allows new clients to have their certificates automatically signed and accepted by the Puppet master.&nbsp; This isn't strictly necessary, but makes operation a little easier when you have a reasonably secure network and you're not handing out any sensitive configuration via Puppet.&nbsp; We use an autosign.conf that looks something like:</p>
<pre>*.example.com</pre>
<p>This means that we're accepting any system that identifies as being in the example.com domain.&nbsp; The main pain with autosigning is that if you reinstall any of the systems and you're using self-generated certificates on the clients, you need to clean out the old certificate before the new one will be accepted; this means issuing a command on the master like:</p>
<pre># puppet cert clean client.example.com</pre>
<p>There are lots of options in Puppet related to certificates and more sophisticated ways to manage them, but this is what we're doing for now.&nbsp; We have filed some enhancement requests to implement ways of integrating Puppet client certificate delivery and signing with Automated Installation, which would make using the two together much more convenient.</p>
<h2>Writing the Puppet Manifests</h2>
<p>Next, we implemented a small <a href="http://mercurial.selenic.com/">Mercurial</a> source repository to store the Puppet manifests and modules.&nbsp; Using a source control system with Puppet is a highly recommended practice, and Mercurial happens to be the one we use for Solaris development, so it's natural for us in this case.&nbsp; We configure <font face="courier new,courier,monospace">/etc/puppet</font> on the Puppet master as a child repository of the main Mercurial repository, so when we have new configuration to apply it's first checked into the main repository and then pulled into Puppet via <font face="courier new,courier,monospace">hg pull -u</font>, then automatically applied as each client polls the master.&nbsp; Our repository presently contains the following:</p>
<pre>./manifests
./manifests/site.pp
./modules
./modules/nameservice
./modules/nameservice/manifests
./modules/nameservice/manifests/init.pp
./modules/nameservice/files
./modules/nameservice/files/prof_attr-zlogin
./modules/nameservice/files/user_attr
./modules/nameservice/files/policy.conf
./modules/nameservice/files/exec_attr-zlogin
./modules/ntp
./modules/ntp/manifests
./modules/ntp/manifests/init.pp
./modules/ntp/files
./modules/ntp/files/ntp.conf
</pre>
<p>An example tar file with all of the above is <a href="https://blogs.oracle.com/dminer/resource/puppet.tar.gz">available for download</a>.</p>
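<p>The day-to-day workflow is then plain Mercurial: a change is committed and pushed to the main repository, then pulled onto the master, from which the agents pick it up on their next poll (the commit message below is just an example):</p>
<pre># in a working clone of the configuration repository
hg commit -m "nameservice: add new administrator to user_attr"
hg push
# on the Puppet master
cd /etc/puppet; hg pull -u</pre>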
<p>The site manifest&nbsp; starts with: <br /></p>
<pre>include ntp
include nameservice
</pre>
<p>The ntp module is the canonical example of Puppet, and is really important for the OpenStack undercloud, as it's necessary for the various nodes to have a consistent view of time in order for the security certificates issued by Keystone to be validated properly.&nbsp; I'll describe the nameservice module a little later in this post.</p>
<p>Since most of our nodes are configured identically, we can use a default node definition to configure them.&nbsp; The main piece is configuring <a href="http://docs.oracle.com/cd/E36784_01/html/E37516/gdysx.html#scrolltoc">Datalink Multipathing</a> (DLMP), which provides us additional bandwidth and higher availability than a single link.&nbsp; We can't yet configure this using SMF, so the Puppet manifest:</p>
<ul>
<li>Figures out the IP address the system is using with some embedded Ruby</li>
<li>Removes the net0 link and creates a link aggregation from net0 and net1</li>
<li>Enables active probing on the link aggregation, so that it can detect upstream switch failures that don't affect link state signaling (link state is still used as well, but it is the only failure-detection mechanism unless probing is enabled)</li>
<li>Configures an IP interface and the same address on the new aggregation link<br /></li>
<li>Restricts Ethernet autonegotiation to 1 Gb to work around issues we have with these systems and the switches/cabling we're using in the lab; without this, we get 100 Mb speeds negotiated about 50% of the time, and that kills performance.</li>
</ul>
<p>You'll note several uses of the <font face="courier new,courier,monospace">require</font> and <font face="courier new,courier,monospace">before</font> statements to ensure the rules are applied in the proper order, as we need to tear down the net0 IP interface before it can be moved into the aggregation, and the aggregation needs to be configured before the IP objects on top of it.</p>
<pre>node default {
    $myip = inline_template("&lt;% _erbout.concat(Resolv::DNS.open.getaddress('$fqdn').to_s) %&gt;")

    # Force link speed negotiation to be at least 1 Gb
    link_properties { "net0":
        ensure     =&gt; present,
        properties =&gt; { en_100fdx_cap =&gt; "0" },
    }
    link_properties { "net1":
        ensure     =&gt; present,
        properties =&gt; { en_100fdx_cap =&gt; "0" },
    }
    link_aggregation { "aggr0":
        ensure      =&gt; present,
        lower_links =&gt; [ 'net0', 'net1' ],
        mode        =&gt; "dlmp",
    }
    link_properties { "aggr0":
        ensure     =&gt; present,
        require    =&gt; Link_aggregation['aggr0'],
        properties =&gt; { probe-ip =&gt; "+" },
    }
    ip_interface { "aggr0":
        ensure  =&gt; present,
        require =&gt; Link_aggregation['aggr0'],
    }
    ip_interface { "net0":
        ensure =&gt; absent,
        before =&gt; Link_aggregation['aggr0'],
    }
    address_object { "net0":
        ensure =&gt; absent,
        before =&gt; Ip_interface['net0'],
    }
    address_object { 'aggr0/v4':
        require      =&gt; Ip_interface['aggr0'],
        ensure       =&gt; present,
        address      =&gt; "${myip}/24",
        address_type =&gt; "static",
        enable       =&gt; "true",
    }
}
</pre>
<p>The controller node declaration includes all of the above functionality, but also adds these elements to keep rabbitmq running and install the mysql database.<br /></p>
<pre>    service { "application/rabbitmq":
        ensure =&gt; running,
    }
    package { "database/mysql-55":
        ensure =&gt; installed,
    }
</pre>
<p>The database installation could have been part of the AI derived manifest as well, but it works just as well here and it's convenient to do it this way when I'm setting up staging systems to test builds before we upgrade.</p>
<p>The nameservice Puppet module is shown below.&nbsp; It's handling both nameservice and <a href="http://docs.oracle.com/cd/E36784_01/html/E37123/index.html">RBAC</a> (Role-based Access Control) configuration:</p>
<pre>class nameservice {
    dns { "openstack_dns":
        search     =&gt; [ 'example.com' ],
        nameserver =&gt; [ '10.2.3.4', '10.6.7.8' ],
    }
    service { "dns/client":
        ensure =&gt; running,
    }
    svccfg { "domainname":
        ensure   =&gt; present,
        fmri     =&gt; "svc:/network/nis/domain",
        property =&gt; "config/domainname",
        type     =&gt; "hostname",
        value    =&gt; "example.com",
    }
    # nameservice switch
    nsswitch { "dns + ldap":
        default   =&gt; "files",
        host      =&gt; "files dns",
        password  =&gt; "files ldap",
        group     =&gt; "files ldap",
        automount =&gt; "files ldap",
        netgroup  =&gt; "ldap",
    }
    # Set user_attr for administrative accounts
    file { "user_attr":
        path   =&gt; "/etc/user_attr.d/site-openstack",
        owner  =&gt; "root",
        group  =&gt; "sys",
        mode   =&gt; 644,
        source =&gt; "puppet:///modules/nameservice/user_attr",
    }
    # Configure zlogin access
    file { "site-zlogin":
        path   =&gt; "/etc/security/prof_attr.d/site-zlogin",
        owner  =&gt; "root",
        group  =&gt; "sys",
        mode   =&gt; 644,
        source =&gt; "puppet:///modules/nameservice/prof_attr-zlogin",
    }
    file { "zlogin-exec":
        path   =&gt; "/etc/security/exec_attr.d/site-zlogin",
        owner  =&gt; "root",
        group  =&gt; "sys",
        mode   =&gt; 644,
        source =&gt; "puppet:///modules/nameservice/exec_attr-zlogin",
    }
    file { "policy.conf":
        path   =&gt; "/etc/security/policy.conf",
        owner  =&gt; "root",
        group  =&gt; "sys",
        mode   =&gt; 644,
        source =&gt; "puppet:///modules/nameservice/policy.conf",
    }
}
</pre>
<p>You may notice that the nameservice configuration here is exactly the same as what we provided in the SMF profile in part 2.&nbsp; We include it here because it's configuration we anticipate changing someday and we won't want to re-deploy the nodes.&nbsp; There are ways we could prevent the duplication, but we didn't have time to spend on it right now and it also demonstrates that you could use a completely different configuration in operation than at deployment/staging time.</p>
<h2>What's with the RBAC configuration?</h2>
<p>The RBAC configuration does two things; the first is configuring the user accounts of the cloud administrators for administrative access on the cloud nodes.&nbsp; The <font face="courier new,courier,monospace">user_attr</font> file we're distributing confers the <font face="courier new,courier,monospace">System Administrator</font> and <font face="courier new,courier,monospace">OpenStack Management</font> profiles, as well as access to the root role (oadmin is just an example account in this case):</p>
<pre>oadmin::::profiles=System Administrator,OpenStack Management;roles=root
</pre>
<p>As we add administrators, I just need to add entries for them to the above file and they get the required access to all of the nodes.&nbsp; Note that this doesn't directly provide administrative access to OpenStack's CLI's or its Dashboard, that's configured within OpenStack.<br /></p>
<p>A limitation of the OpenStack software we include in Solaris 11.2 is that we don't provide the ability to connect to the guest instance consoles, an important feature that's being worked on.&nbsp; The <font face="courier new,courier,monospace">zlogin User</font> profile is something I created to work around this problem and allow our cloud users to get access to the consoles, as this is often needed in Solaris development and testing.&nbsp; First, the profile is defined by a prof_attr file with the entry:</p>
<pre>zlogin User:::Use zlogin:auths=solaris.zone.manage</pre>
<p>We also need an exec_attr file to ensure that zlogin is run with the needed uid and privileges:</p>
<pre> zlogin User:solaris:cmd:RO::/usr/sbin/zlogin:euid=0;privs=ALL</pre>
<p>Finally, we modify the RBAC policy file so that all users are assigned to the zlogin User profile:</p>
<pre>PROFS_GRANTED=zlogin User,Basic Solaris User</pre>
<p>The result of all this is that a user can obtain access to their specific OpenStack guest instance by logging in to the compute node on which the guest is running and running a command such as:</p>
<pre>$ pfexec zlogin -C instance-0000abcd</pre>
<p>At this point we have the undercloud nodes fully configured to support our OpenStack deployment.&nbsp; In part 4, we'll look at the scripts used to configure OpenStack itself.</p>
<h1><a href="https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_for1">Building an OpenStack Cloud for Solaris Engineering, Part 2</a></h1>
<p><i>Dave Miner, 2014-09-02</i></p>
<p>
Continuing from where I left off with <a href="https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_for">part 1</a> of this series, in this posting I'll discuss the elements that we put in place to deploy the <a href="http://openstack.org">OpenStack</a> cloud infrastructure, also known as the <i>undercloud</i>.</p>
<p>The general philosophy here is to automate everything, both because it's
a best practice and because this cloud doesn't have any dedicated staff
to manage it; we're doing it ourselves in order to get first-hand
operational experience that we can apply to improve both Solaris and
OpenStack.&nbsp; As I said in part 1, we don't have an HA requirement at this
point, but we'd like to keep any outages, both scheduled and
unscheduled, to less than a half hour, so redeploying a failed node
should take no more than 20 minutes.&nbsp; The pieces that we're using are:
</p>
<ul>
<li>Automated Installation services and manifests to deploy Solaris</li>
<li>SMF profiles to configure system services</li>
<li>IPS site packages installed as part of the AI deployment to automate some first-boot configuration</li>
<li>A Puppet master to provide initial and ongoing configuration automation</li>
</ul>
<p>I'll elaborate on the first two below, and discuss Puppet in the next posting.&nbsp; The IPS site packages we are using are specific to Oracle's environment so I won't be covering those in detail.</p>
<p>Sanitized versions of the manifests and profiles discussed below are available for <a href="https://blogs.oracle.com/dminer/resource/ai_openstack.tar.gz">download as a tar file</a>.<br /></p>
<h2>Automated Installation</h2>
<p>Building the undercloud nodes means we're doing bare-metal provisioning, so we'll be using the <a href="http://docs.oracle.com/cd/E36784_01/html/E36800/useaipart.html">Automated Installation</a> (AI) feature in Solaris 11.&nbsp; Most of the OpenStack services could run in <a href="http://docs.oracle.com/cd/E36784_01/html/E37629/index.html">kernel zones</a>, or even non-global zones, but we're planning for larger scale and want to have some extra horsepower.&nbsp; Therefore we opted not to go in that direction for now, but it may well be an option we use later for some services.</p>
<p>I already had an existing AI server in this cloud's lab, and it provides services to systems that aren't part of this cloud.&nbsp; As we release each development build of a Solaris 11 update or Solaris 12, a new service is generated on it.&nbsp; The pace of evolution of this cloud is likely to be different from those other systems as well, so that led me to create two new AI services specifically for the cloud; we can make these services aliases of existing services so we don't need to bother replicating the boot image, thus the commands look like (output elided):</p>
<pre># installadm create-service -n cloud-i386 --aliasof solaris11_2-i386
# installadm create-service -n cloud-sparc --aliasof solaris11_2-sparc
</pre>
<p>The next step is setting up the manifests that specify the installation.&nbsp; For this, I've taken the default derived manifest that we install for services and modified it to:</p>
<ol>
<li> Specify a custom package list</li>
<li>Lay out all of the storage</li>
<li>Select the package repository based on Solaris release</li>
<li>Install a Unified Archive rather than a package set based on a boot argument</li>
</ol>
<p>You can <a href="https://blogs.oracle.com/dminer/resource/ai_openstack.tar.gz">download</a> the complete manifest, I'll discuss the various customizations here.</p>
<p>The package list we're explicitly installing is below, there are of course a number of other packages pulled in as dependencies, so this expands out to just over 500 packages installed (perhaps not surprisingly, about 35% are Python libraries):<br /></p>
<pre> pkg:/entire
pkg:/group/system/solaris-minimal-server
pkg:/cloud/openstack
pkg:/database/mysql-55/client
pkg:/diagnostic/snoop
pkg:/library/python/markupsafe
pkg:/library/python/python-mysql
pkg:/library/python/pip
pkg:/naming/ldap
pkg:/network/amqp/rabbitmq
pkg:/network/rsync
pkg:/network/ssh
pkg:/security/nss-utilities
pkg:/service/network/ntp
pkg:/service/security/kerberos-5
pkg:/system/fault-management/smtp-notify
pkg:/system/file-system/autofs
pkg:/system/file-system/nfs
pkg:/system/management/fwupdate
pkg:/system/management/hwmgmtcli
pkg:/system/management/ilomconfig
pkg:/system/management/puppet
pkg:/system/management/rad/module/rad-evs-controller
pkg:/system/network/bpf
pkg:/system/network/nis
pkg:/system/zones/brand/brand-solaris-kz
pkg:/text/doctools
pkg:/text/less
pkg://openstacklab/site-custom
pkg://openstacklab/ldapcert
</pre>
<p>We start with solaris-minimal-server in order to build an effectively minimized environment.&nbsp; We've chosen to install the same package set on all nodes so that any of them can be easily repurposed to a different role in the cloud if needed, so the openstack group package is used rather than the packages for the various OpenStack services.&nbsp; We'll be using MySQL as the database, so need its client package.&nbsp; snoop is there for network diagnostics (yes, we should use tshark instead but I'm old-school :-), some Python packages that we need to support OpenStack, as well as RabbitMQ as that's our message broker.&nbsp; We use LDAP for authentication so that's included.&nbsp; I find rsync convenient for caching crash dumps off to other systems for examination.&nbsp; ssh is needed for remote access.&nbsp; nss-utilities are needed for some LDAP configuration.&nbsp; OpenStack needs consistent time, so NTP is required.&nbsp; We use Kerberos for some NFS access so that's included, along with the automounter and NFS client.&nbsp; We want to use SMTP notifications for any fault management events, so include it.&nbsp; The utilities to manage Oracle hardware may come in handy, so we include them.&nbsp; Puppet is going to provide ongoing configuration management, so it's included.&nbsp; We need rad-evs-controller to back-end our Neutron driver.&nbsp; bpf is listed only because of a missing dependency in another package that causes runaway console messages from the DLMP daemon; that's being fixed.&nbsp; The NIS package provides some things that LDAP needs.&nbsp; We're using kernel zones as the primary OpenStack guest, so need that zone brand installed.&nbsp; The doctools package provides the man command, don't want to be caught without a man page when you need it!&nbsp; less is there because it's better than more.&nbsp; Finally, we install a couple of site packages, one that does some general customizations, another that delivers the base certificate needed for TLS access to our LDAP servers.</p>
<p>The storage layout we standardized on for the undercloud is to have a two-way mirror for the root pool, formed out of two of the smallest disks (usually 300 GB on the systems we're using), with any remaining disks in a separate pool, called <font face="courier new,courier,monospace">tank</font> on all of the systems, that can be used for other purposes.&nbsp; On the Cinder node, it's where we put all the ZFS iSCSI targets; in the case of Glance it's where we store the images.&nbsp; We're also planning to use it for Swift services on various nodes, but we haven't deployed Swift yet.&nbsp; The <font face="courier new,courier,monospace">tank</font> pool gets built with varying amounts of redundancy based on the number of disks.&nbsp; This logic is all in the last 60 lines of the manifest script.&nbsp; It's an interesting example of using the derived manifest features to do some reasonably complex customization for individual nodes.<br /></p>
<p>We internally have separate repositories for Solaris 11 and Solaris 12, so the manifest defaults to Solaris 12 and if it determines we've booted Solaris 11 to install, then it uses a different repository:</p>
<pre>if [[ $(uname -r) = 5.11 ]]; then
    aimanifest set /auto_install/ai_instance/software[@type="IPS"]/source/publisher[@name="solaris"]/origin@name http://example.com/solaris11
fi
</pre>
<p>The last trick I added was the ability to select a Unified Archive to install instead of the packages.&nbsp; We'll be using archives as the backup/recovery mechanism for the infrastructure, so this provides a faster way to deploy nodes when we already have the desired archive available.&nbsp; On a SPARC system you'd select this using a boot command like:</p>
<pre>ok boot net:dhcp - install archive_uri=http://example.com/openstack_archive.uar</pre>
<p>On an x86 system you'd add this as <font face="courier new,courier,monospace">-B archive_uri=&lt;uri&gt;</font> to the $multiboot line in grub.cfg.</p>
<p>The code for this in the script looks like:</p>
<pre>if [[ ${SI_ARCH} = sparc ]]; then
    ARCHIVE_URI=$(prtconf -vp | nawk \
        '/bootargs.*archive_uri=/{n=split($0,a,"archive_uri=");split(a[2],b);split(b[1],c,"'\''");print c[1]}')
else
    ARCHIVE_URI=$(devprop -s archive_uri)
fi
if [[ -n "$ARCHIVE_URI" ]]; then
    # Replace package software section with archive
    aimanifest delete software
    swpath=$(aimanifest add -r /auto_install/ai_instance/software@type ARCHIVE)
    aimanifest add $swpath/source/file@uri $ARCHIVE_URI
    inspath=$(aimanifest add -r $swpath/software_data@action install)
    aimanifest add $inspath/name global
</pre>
<p>...</p>
<p>Once we have the manifest, it's a simple matter to make it the default manifest for both of the cloud services:</p>
<pre># installadm create-manifest -n cloud-i386 -d -f havana.ksh
# installadm create-manifest -n cloud-sparc -d -f havana.ksh
</pre>
<p>Each of the systems we're including in the cloud infrastructure are assigned to the appropriate AI service with a command such as:</p>
<pre># installadm create-client -n cloud-sparc -e &lt;mac address&gt;</pre>
<h2>SMF Configuration Profiles</h2>
<p>But before we go on to installing the systems, we also want to provide SMF (Service Management Facility) configuration profiles to automate the initial system configuration; otherwise, we'll be faced with running the interactive sysconfig tool during the initial boot.&nbsp; For this deployment, we have a somewhat unusual twist, in that there is configuration we'd like to share between the infrastructure nodes and guests since they are ultimately all nodes on the Oracle internal network.&nbsp; Also, for maximum flexibility and reuse, the configuration is expressed by multiple profiles, with each designed to configure only some aspects of the system.&nbsp; In our case, we have a directory structure on the AI server that looks like:</p>
<blockquote>
<pre>infrastructure.xml
puppet.xml
users.xml
common/basic.xml
common/dns.xml
common/ldap.xml</pre>
</blockquote>
<p>The first three are specific to the infrastructure nodes.&nbsp; The <font face="courier new,courier,monospace">infrastructure.xml</font> profile provides the fixed network configuration, along with coreadm setup and fault management notifications; we use SMTP notifications to alert us to any faults from the system.&nbsp; The <font face="courier new,courier,monospace">puppet.xml</font> profile configures the puppet agents with the name of the master node.&nbsp; The <font face="courier new,courier,monospace">users.xml</font> profile configures the root account as a role and sets its password, and also sets up a local system administrator account that's meant to be used in case of networking issues that prevent our administrators from using their normal user accounts.<br /></p>
<p>The three profiles under the common directory are also used to configure guest instances in our cloud.&nbsp; I'll show how that's done later in this series, but it's important that they be under a separate directory.&nbsp; <font face="courier new,courier,monospace">basic.xml</font> configures the system's timezone, default locale, keyboard layout, and console terminal type.&nbsp; <font face="courier new,courier,monospace">dns.xml</font> configures the DNS resolver, and <font face="courier new,courier,monospace">ldap.xml</font> configures the LDAP client.</p>
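<p>To give a sense of the format, a trimmed-down profile along the lines of our <font face="courier new,courier,monospace">dns.xml</font> might look like the following; the nameserver address and search domain here are placeholders rather than our real configuration:</p>
<pre>&lt;!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"&gt;
&lt;service_bundle type="profile" name="dns"&gt;
  &lt;service version="1" type="service" name="network/dns/client"&gt;
    &lt;property_group type="application" name="config"&gt;
      &lt;property type="net_address" name="nameserver"&gt;
        &lt;net_address_list&gt;
          &lt;value_node value="192.0.2.53"/&gt;
        &lt;/net_address_list&gt;
      &lt;/property&gt;
      &lt;property type="astring" name="search"&gt;
        &lt;astring_list&gt;
          &lt;value_node value="example.com"/&gt;
        &lt;/astring_list&gt;
      &lt;/property&gt;
    &lt;/property_group&gt;
    &lt;instance enabled="true" name="default"/&gt;
  &lt;/service&gt;
&lt;/service_bundle&gt;
</pre>
<p>A convenient way to produce these is to run <font face="courier new,courier,monospace">sysconfig create-profile</font> on an already-configured system and then trim the output down so that each profile owns only its particular services.</p>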
<p>We load each of these into the AI services with the command:</p>
<pre># installadm create-profile -n cloud-sparc -f &lt;file name&gt;</pre>
<p>The important aspect of the above command is that no criteria are specified for the profiles, which means they are applied to all clients of the service.&nbsp; This also means the profiles must be disjoint: if two profiles attempt to configure the same property on the same service, SMF will not apply the conflicting profiles.</p>
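<p>For contrast, if we did want a profile to apply only to particular clients, criteria can be attached when the profile is added; for example, restricting a hypothetical profile to a single client by MAC address (both the file name and the address below are made up for illustration):</p>
<pre># installadm create-profile -n cloud-sparc -f special.xml -c mac="0:14:4f:20:53:97"</pre>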
<p>Once all that's done, we can see the results:</p>
<pre># installadm list -p -m -n cloud-sparc
Service Name Manifest Name Type Status Criteria
------------ ------------- ---- ------ --------
cloud-sparc havana.ksh derived default none
Service Name Profile Name Criteria
------------ ------------ --------
cloud-sparc basic.xml none
dns.xml none
infrastructure.xml none
ldap.xml none
puppet.xml none
users.xml none </pre>At this point we've got enough infrastructure implemented to install the OpenStack undercloud systems.&nbsp; In the next posting I'll cover the Puppet manifests we're using; after that we'll get into configuring OpenStack itself.<br />https://blogs.oracle.com/dminer/entry/building_an_openstack_cloud_forBuilding an OpenStack Cloud for Solaris Engineering, Part 1Dave Miner-Oracle 2014-08-22T19:52:06+00:002014-08-22T19:52:06+00:00<p>One of the signature features of the recently-released <a href="http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html">Solaris 11.2</a> is the <a href="http://www.openstack.org">OpenStack</a> cloud computing platform.&nbsp; Over on the <a href="https://blogs.oracle.com/openstack/">Solaris OpenStack blog</a> the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.&nbsp; In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.&nbsp; But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.<br /><br />In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.&nbsp; But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.&nbsp; We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).&nbsp; Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.&nbsp; Sounds like pretty much every traditional IT shop, right?&nbsp; Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.&nbsp; As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.<br /><br />At this point, let's acknowledge one fact: deploying OpenStack is <b>hard</b>.&nbsp; It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.&nbsp; The web UI, Horizon, doesn't often do a good job of providing detailed errors.&nbsp; Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.&nbsp; I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started so I at least came to this job with some 
appreciation for what I was taking on.&nbsp; The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/downloads/unified-archives-2245488.html">OpenStack Unified Archive</a> from <a href="http://www.oracle.com/technetwork/index.html">OTN</a> to get your hands on a single-system demonstration environment.&nbsp; I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.&nbsp; For some situations, it may in fact be all you ever need.&nbsp; If so, you don't need to read the rest of this series of posts!<br /><br />In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.&nbsp; We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.&nbsp; We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.&nbsp; One thing we don't have is a requirement for extremely high availability, at least at this point.&nbsp; We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.&nbsp; Thus I didn't need to spend effort on trying to get high availability everywhere.<br /><br />The diagram below shows our initial deployment design.&nbsp; We're using six systems, most of which are x86 because we had more of those immediately available.&nbsp; All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).&nbsp; A separate VLAN provides &quot;public&quot; (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks.<br /></p>
<p><img align="middle" alt="Solaris cloud diagram" src="https://blogs.oracle.com/dminer/resource/solaris_cloud2.png" /><br /></p>
<p>One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.&nbsp; We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.<br /><br />I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.&nbsp; The other node with lots of disks provides Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.<br /><br />There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.&nbsp; We don't have any need for firewalling in this deployment so we're not doing so.&nbsp; We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.&nbsp; Each tenant has one VxLAN defined initially, but we can of course add more.&nbsp; Right now we have just a single /24 network for the floating IP's, once we get demand up to where we need more then we'll add them.<br /><br />Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.&nbsp; We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.<br /><br />My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.&nbsp; After that we'll get into the specifics of configuring and running OpenStack itself.
</p>https://blogs.oracle.com/dminer/entry/detroit_solaris_11_forum_februaryDetroit Solaris 11 Forum, February 8Dave Miner-Oracle 2012-01-31T21:20:15+00:002012-01-31T21:21:53+00:00<p>I'm just posting this quick note to help publicize the <a href="http://www.oracle.com/us/dm/h2fy11/35622-nafm10128512mpp008-oem-1432458.html">Oracle Solaris 11 Technology Forum</a> we're holding in the Detroit area next week.&nbsp; There's still time to register and come get a half-day overview of the great new stuff in Solaris 11.&nbsp; The &quot;special treat&quot; that's not mentioned in the link is that I'll be joining Jeff Victor as a speaker.&nbsp; Looking forward to being back in my home state for a quick visit, and hope I'll see some old friends there!</p>https://blogs.oracle.com/dminer/entry/solaris_at_lisa_2011Solaris at LISA 2011Dave Miner-Oracle 2011-11-22T09:53:45+00:002011-11-22T09:53:45+00:00<p>As is our custom, the Solaris team will be out in force at the USENIX LISA conference; this year it's in Boston so it's sort of a home game for me for a change.&nbsp; The big event we'll have is Tuesday, December 6, the <a href="http://blogs.oracle.com/solaris/entry/oracle_solaris_11_summit_day1">Oracle Solaris 11 Summit Day</a>.&nbsp; We'll be covering deployment, ZFS, Networking, Virtualization, Security, Clustering, and how Oracle apps run best on <a href="http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html">Solaris 11</a>.&nbsp; We've done this the past couple of years and it's always a very full day.</p><p>On Wednesday, December 7, we've got a couple of BOF sessions scheduled back-to-back.&nbsp; At 7:30 we'll have the ever-popular engineering panel, with all of us who are speaking at Tuesday's summit day there for a free-flowing discussion of all things Solaris.&nbsp; Following that, Bart &amp; I are hosting a second BOF at 9:30 to talk more about deployment for clouds and traditional data centers.</p><p>Also, on Wednesday and Thursday we'll have a booth at the exhibition where there'll be demos and just a general chance to talk with various Solaris staff from engineering and product management.</p><p>The conference program looks great and I look forward to seeing you there!</p>https://blogs.oracle.com/dminer/entry/virtually_the_fastest_way_toVirtually the fastest way to try Solaris 11 (and Solaris 10 zones)Dave Miner-Oracle 2011-11-17T17:30:00+00:002011-11-17T17:30:00+00:00<p>If you're looking to try out Solaris 11, there are the standard ISO and USB image downloads on the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html">main page</a>.&nbsp; Those are great if you're looking to install Solaris 11 on hardware, and we hope you will.&nbsp; But if you take the time to look down the page, you'll find a link off to the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html">Oracle Solaris 11 Virtual Machine downloads.</a>&nbsp; There are two downloads there:</p><ol><li>A pre-built Solaris 10 zone</li><li>A pre-built Solaris 11 VM for use with VirtualBox</li></ol><p>If you're looking to try Solaris 11 on x86, the second one is what you want.&nbsp; Of course, this assumes you have <a href="http://www.virtualbox.org/">VirtualBox</a> already (and if you don't, now's the time to try it, it's a terrific free desktop virtualization product).&nbsp; Once you complete the 1.8 GB download, it's a simple matter of unzipping the archive and a few quick clicks in VirtualBox to get a Solaris 
11 desktop booted.&nbsp; While it's booting, you'll get to run through the new system configuration tool (that'll be the subject of a future posting here) to configure networking, a user account, and so on.</p><p>So what about that pre-built Solaris 10 zone download?&nbsp; It's a really simple way to get yourself acquainted with the <a href="http://download.oracle.com/docs/cd/E23824_01/html/821-1460/gjfdr.html#scrolltoc">Solaris 10 zones</a> feature, which you may well find indispensible in transitioning an existing Solaris 10 infrastructure to Solaris 11.&nbsp; Once you've downloaded the file, it's a self-extracting executable that'll configure the zone for you, all you have to supply is an IP address for the zone.&nbsp; It's really quite slick!</p><p>I expect we'll do a lot more pre-built VM's and zones going forward, as that's a big part of being a cloud OS; if there's one that would be really useful for you, let us know.</p>https://blogs.oracle.com/dminer/entry/solaris_11_technology_forums_nycSolaris 11 Technology Forums, NYC and BostonDave Miner-Oracle 2011-11-15T16:36:28+00:002011-11-15T16:36:29+00:00By now you're certainly aware that we released <a href="http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html?ssSourceSiteId=ocomen">Solaris 11</a>; I was on vacation during the launch so haven't had time to write any material related to the Solaris 11 installers, but will get to that soon.&nbsp; Following onto the release, we're scheduling events in various locations around the world to talk about some of the key new features in Solaris 11 in more depth.&nbsp; In the northeast US, we've scheduled technology forums in <a href="http://www.oracle.com/us/dm/h2fy11/21281-nafm10128512mpp016-oem-525336.html">New York City on November 29</a>, and <a href="http://www.oracle.com/us/dm/h2fy11/21285-nafm10128512mpp013-oem-525338.html">Burlington, MA on November 30</a>.&nbsp; Click on those links to go to the detailed info and registration.&nbsp; I'll be one of the speakers at both of them, so hope to see you there!https://blogs.oracle.com/dminer/entry/solaris_11_express_interactive_installationSolaris 11 Express Interactive InstallationDave Miner-Oracle 2010-11-16T14:40:16+00:002010-11-16T22:39:55+00:00<p>
One thing I didn't note in my <a href="/dminer/entry/oracle_solaris_11_express_2010">previous entry</a> on the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html">Solaris 11 Express 2010.11 release</a> is that there are some new developments in installation since the last available builds of OpenSolaris.&nbsp; This post just discusses the interactive installation options, while a subsequent entry will discuss the Automated Installer.</p>
<p>Before digging into the details, it's probably useful to explain the philosophy of the interactive installers a bit for those encountering them for the first time, as it is somewhat of a departure from Solaris 10 and prior.&nbsp; Our basic guiding principle is probably best summarized as, &quot;Get the system installed and get out of the way.&quot;&nbsp; To elaborate a bit, the idea is to collect a minimal amount of configuration required to make the installed system functional, execute the install quickly, and let the user get on with using the system.&nbsp; That means that a lot of the configuration you might have been asked about in past Solaris releases, such as Kerberos or NFS domains, or installing additional, layered software, are just not present.&nbsp; You're asked only to select a disk, partition it a bit if you want, provide timezone and locale, and create a user account.&nbsp; You're also not prompted to interactively select the software to be installed.&nbsp; Instead, the software that's present on the media is what's installed, providing a useful starting point at first boot.&nbsp; From there, you can use tools like the <font face="courier new,courier,monospace">pkg</font> CLI or the Package Manager GUI to customize software to your heart's content, all installed from the convenience of a software repository on the network.<br /></p>
<p>There are several reasons why we think this shift is appropriate.&nbsp; First, many of the configuration settings that were prompted for in the past were of interest to only small minorities of users.&nbsp; That means we were making it harder for the majority, which is almost always a bad choice.&nbsp; Second, we've put in a concerted effort over the past 5+ years to make Solaris configured more correctly to start with, and more capable of self-configuring, so that more users get the best results, not just those who can figure out the right knobs to twist.&nbsp; The end results should be better for all of us in the Solaris ecosystem, as behavior will be more consistent and predictable.&nbsp; Finally, in terms of software selection, we've reached the point where the commonly-available media format (DVD) just isn't large enough to incorporate all the software we want to provide as part of the product - we've just plain outstripped the rate of improvement in software compression technology.&nbsp; It's well past time that we oriented Solaris towards a network-centric software delivery paradigm.<br /></p>
<div class="zemanta-pixie">
<h2>Text Installer<br /></h2>
<p>The most obvious difference to OpenSolaris users is the addition of the Text Installer, a curses-based interactive UI designed to run comfortably on all those servers out there that have only serial consoles.&nbsp; Those that were following the OpenSolaris development train did see a late preview of this from the project team back around build 134, but S11 Express is the first release that includes this installer.&nbsp; This now means that there is an interactive install option for SPARC users, as the GUI install is offered only on the x86 live CD.</p>
<p>Philosophically, this UI shares a fair amount with the GUI: it's a fairly streamlined experience that doesn't allow customization of the software payload, but does allow a little more freedom in disk configuration (most notably, the ability to preserve existing VTOC slices).&nbsp; Like the GUI, the installation is a direct copy of the media contents, so what is included on the media defines the installation.</p>
<p>Initially, we've opted to include this installer only on a new, separate ISO download, identified as Text Install on the <a href="http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html">downloads</a> page.&nbsp; This image might be more accurately called &quot;Server Install&quot;, as that's what it really is meant to be: a generic server installation that includes most, if not all, of the Solaris server elements, but omits the GNOME desktop and related applications.&nbsp; If this is the image you downloaded and installed but you really wanted the GNOME desktop (easy to do since it's the first image on the page), then the easy solution is to install the package set that appears on the live CD media; you can accomplish that with the command <font face="monospace">pkg install slim_install</font>, slim_install being the IPS <i>group package</i> that we use to define the live CD contents.&nbsp; Incidentally, the group package that defines the text install media contents is the <font face="monospace">server_install</font> package.</p>
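<p>For example, on a freshly text-installed machine you could add the desktop payload, and optionally peek at what the group package pulls in first; the <font face="courier new,courier,monospace">pkg contents</font> query below is just one way to do that:</p>
<pre># pkg contents -r -t depend -o fmri slim_install
# pkg install slim_install</pre>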
<p>One thing that server administrators will undoubtedly find missing is the ability to directly configure the network as part of the install; right now it defaults to the automatic configuration technology we call Network Auto-Magic (or NWAM).&nbsp; We do plan to extend the text installer to also provide static network configuration, so you'll be able to supply an IP address and nameservice configuration directly, rather than having to do this post-installation.</p>
<div class="zemanta-pixie">
<h2>GUI Installer<br /></h2>
<p>The GUI installer has undergone some small changes from the versions provided with OpenSolaris.&nbsp; If the last time you used it was with OpenSolaris 2009.06, the biggest difference is that it provides support for extended partitions, which provides a little more flexibility in dealing with the limitations of the x86 partitioning scheme and eases co-existence with other OS's in multi-boot configurations.&nbsp; The other change here, more subtle, is that the UI no longer separately prompts for the root password.&nbsp; Instead, the password for the root role is set to the same password as the initial user account (which is now required, where it was optional during OpenSolaris releases).&nbsp; The root password is created as expired, however, so first time you <font face="monospace">su</font> to root, you'll be prompted to change the password.&nbsp; Finally, the initial user account is no longer assigned the Primary Administrator profile to enable administrative access.&nbsp; Instead, the user account retains access to the root role, and is also given all access to <font face="monospace">sudo</font>.&nbsp; The text installer does allow independent setting of the root password at this release, but we expect to align it with the GUI in a future build.<br /><br /> </p>
</div>
</div>https://blogs.oracle.com/dminer/entry/oracle_solaris_11_express_2010Oracle Solaris 11 Express 2010.11 is releasedDave Miner-Oracle 2010-11-15T08:30:47+00:002010-11-15T16:33:40+00:00Today marks the release of Oracle Solaris 11 Express 2010.11, beginning the rollout of our long-gestating successor to Solaris 10.&nbsp; The summary and links to most everything are available on the OTN <a href="http://www.oracle.com/technetwork/server-storage/solaris11/index.html">Oracle Solaris 11 Overview</a>.&nbsp; Probably the biggest thing to emphasize is that this is a supported release, not a &quot;beta&quot; or preview; see the link for the support options.&nbsp; That said, feature development continues in anticipation of a Solaris 11 release in 2011, as was outlined at OpenWorld back in September.<br /><br />For those who used the OpenSolaris distribution releases, you'll find this release quite familiar, as it's the continuing evolution of the technology we introduced in those releases: the installers from the <a href="http://hub.opensolaris.org/bin/view/Project+caiman/WebHome">Caiman</a> project, the <a href="http://hub.opensolaris.org/bin/view/Project+pkg/">IPS packaging</a> system, and all the other great things that my colleagues in Solaris engineering have been developing for the past several years in networking, storage, security and so on.&nbsp; The biggest visible differences are a different package repository, license terms, and of course Oracle branding.<br /><br />For those of you who weren't users of OpenSolaris, well, now is the time to really start getting your feet wet, evaluating Solaris 11 and planning its deployment in your environment.&nbsp; We hope you'll like it!<br />
<div class="zemanta-pixie"><img src="http://img.zemanta.com/pixy.gif?x-id=4e7061ac-0a88-81ee-8578-552ce761dbfe" class="zemanta-pixie-img" /></div>https://blogs.oracle.com/dminer/entry/solaris_bof_s_at_lisaSolaris BOF's at LISA 09Dave Miner-Oracle 2009-10-23T12:16:08+00:002009-10-23T19:17:06+00:00<font face="sans-serif">As usual, Solaris will have a strong presence at this year's <a href="http://www.usenix.org/events/lisa09/">LISA</a> conference, November 1-6 in Baltimore.&nbsp; For the first time in a few years I'm also going to be there.&nbsp; On Tuesday night, Nov. 3, we'll be having </font>several <a href="http://www.usenix.org/events/lisa09/bofs.html">BOF sessions</a>.&nbsp; The one I'll be a part of will be a discussion of the changes coming in Solaris Next (the code name for the successor to Solaris 10 that will be based on the <a href="http://www.opensolaris.com/">OpenSolaris distribution</a>).&nbsp; Many of the most visible changes involve the <a href="http://www.opensolaris.org/os/project/caiman/">installation</a> and <a href="http://www.opensolaris.org/os/project/pkg/">packaging</a> software, hence my involvement.&nbsp; This will be a great opportunity for interactive discussion and feedback from those who can attend; I hope to see you there!<br /><a href="http://www.usenix.org/lisa09/going"> <img border="0" width="169" height="77" alt="I'm going to LISA '09" src="http://www.usenix.org/events/lisa09/art/lisa09_going.jpg" /> </a><br /><br />https://blogs.oracle.com/dminer/entry/neosug_meeting_july_24NEOSUG meeting July 24Dave Miner-Oracle 2008-07-17T01:57:52+00:002008-07-17T08:58:00+00:00<a href="http://pbgalvin.wordpress.com/">Peter</a> has just <a href="http://pbgalvin.wordpress.com/2008/07/16/fifth-neosug-meeting-special-guest-speaker/">announced</a> the next meeting of the New England OpenSolaris User Group, which is on July 24. Sorry to say that I'll not be there, as I'll be on the plane back from California that evening, but hope we get a good turnout nonetheless.<br />https://blogs.oracle.com/dminer/entry/opensolaris_the_distroOpenSolaris, the distroDave Miner-Oracle 2008-05-04T23:44:49+00:002008-05-05T14:19:59+00:00As of a little while ago, the official bits for OpenSolaris 2008.05 went live, at the distro's home site, <a href="http://www.opensolaris.com/">opensolaris.com</a>.
While it may seem odd to say, I view this day more as a beginning than
an ending (though I am more than happy to call an end to the 60+ hour
weeks that went into building it!). It's a beginning in many ways, but
I'll just say that while we've shipped an image and loaded up a pretty
good number of packages into the repository, most of the functionality
we plan to ultimately have isn't there yet, not to mention the number
of packages we want to have in the repository.<br /><br />At the moment I'm
too worn out from the weekend at the OpenSolaris Summit to even attempt
to write anything technical, as it likely wouldn't make any sense, so
I'll just keep this short and close with a big THANK YOU to everyone on
the <a href="http://www.opensolaris.org/os/project/caiman/">Caiman</a> team for all the work they've done in getting us to this
milestone. It's time to feel good about what we've done.<br /><br />Look forward to seeing lots of you at <a href="http://developers.sun.com/events/communityone/">CommunityOne</a>!https://blogs.oracle.com/dminer/entry/slides_from_indiana_at_neosugSlides from Indiana at NEOSUGDave Miner-Oracle 2007-11-02T10:18:39+00:002007-11-02T17:18:39+00:00I've posted my slides from my talk about /demo of the preview release at last night's <a href="http://www.opensolaris.org/os/project/ne-osug/">NEOSUG</a> meeting, get 'em in <a href="http://mediacast.sun.com/details.jsp?id=3880">PDF</a> or <a href="http://mediacast.sun.com/details.jsp?id=3879">ODP</a> format.&nbsp; Thanks to those who came by, hope you had good luck with the CD's we handed out!<br />https://blogs.oracle.com/dminer/entry/neosug_on_november_1NEOSUG on November 1Dave Miner-Oracle 2007-10-05T12:02:41+00:002007-10-05T19:04:41+00:00Peter's posted the announcement of the next New England OpenSolaris User Group.<br /><br /><a href="http://pbgalvin.wordpress.com/2007/10/02/fourth-neosug-meeting/">Fourth NEOSUG Meeting « Peter Baer Galvin’s Blog</a><br /><br />Should be an interesting juxtaposition of how we're both continuing to provide Solaris compatibility with the Solaris 8 Migration Assistant and at the same time exploring new territory with <a href="http://www.opensolaris.org/os/project/indiana/">Project Indiana</a>.&nbsp; I'd write more, but I need to get back to getting that Indiana preview ready so I'll actually have something to talk about ;-)&nbsp; Hope to see you there!<br />https://blogs.oracle.com/dminer/entry/sun_tech_days_in_bostonSun Tech Days in BostonDave Miner-Oracle 2007-08-09T11:15:12+00:002007-08-09T18:15:12+00:00For those of you in the Northeast US, the traveling developer conference that we call <a href="http://developers.sun.com/events/techdays/2007/US_BOS.jsp">Sun Tech Days<br /></a> will be coming to Boston next month, September 11-12.&nbsp; I'll be speaking at the <a href="http://www.opensolaris.org/os/community/advocacy/events/current_tech_days/">OpenSolaris Day</a> on the 11th, covering the "What is Solaris Nevada?" 
session, which will include an update on <a href="http://www.opensolaris.org/os/project/indiana/">Indiana</a>.&nbsp; These sessions are free, but you do need to register to attend.&nbsp; Hope to see you there!<br />https://blogs.oracle.com/dminer/entry/neosug_first_meeting_on_januaryNEOSUG first meeting on January 31Dave Miner-Oracle 2007-01-26T12:03:08+00:002007-01-26T20:03:08+00:00I've spammed a couple of newsgroups and mailing lists announcing this, might as well close the net by doing it to my blog, too.&nbsp; Anyway, the first meeting of the <a href="http://www.opensolaris.org/os/community/os_user_groups/ne-osug/">New England OpenSolaris User Group</a> is happening at 6:00 PM Wednesday, January 31, 2007, at our <a href="http://maps.yahoo.com/index.php#mvt=m&amp;q1=One+Network+Drive+MS+UBUR02-212%2C+Burlington+MA+01803-0902&amp;trf=0&amp;lon=-71.233292&amp;lat=42.499694&amp;mag=7">campus in Burlington, MA</a>.&nbsp; I'll be speaking, as will the esteemed <a href="http://blogs.sun.com/webmink/">Simon Phipps</a> and <a href="http://pbgalvin.wordpress.com/">Peter Baer Galvin</a>.&nbsp; Full details are at the <a href="https://www.suneventreg.com//cgi-bin/register.pl?EventID=1288">registration page</a>.&nbsp; Hope to see you there!https://blogs.oracle.com/dminer/entry/finally_solaris_on_my_homeFinally, Solaris on my home desktopDave Miner-Oracle 2005-12-23T14:54:21+00:002006-03-07T15:56:11+00:00Ouch, this blog is really looking neglected. Probably a New Year's resolution to be made there, but that'll wait for a couple weeks, right after the one about procrastination...
<p>
This entry is a bit of a celebration, in that I'm finally in a position to run Solaris on my home desktop full-time again. That will undoubtedly seem odd given where I've worked all these years, but there have been a lot of reasons, mostly having to do with hardware support for the Athlon system I built myself a couple of years back, which has a lot of scrounged parts. I have to admit, when I'm spending my own money on computer equipment, I'm a cheapskate, a result of throwing away too many systems before they'd really reached their end of life. Usually, it seems we software types bloat them into submission. Anyway...
<p>
I'd been fiddling with putting Solaris on this system for several months now when I'd had a little extra time (which hasn't been often). But I'd been stymied by the Nvidia driver not supporting its video card, a GeForce MX 440 that I got for free from <a href="http://blogs.sun.com/seb">Seb</a> . This was a non-negotiable requirement, because it has dual outputs and can thus run Nvidia's TwinView option to give me the screen real estate that I need. Today, though, I finally succeeded, with Solaris Nevada build 30 (coming to Solaris Express early in 2006), and <a href="http://www.nvidia.com">Nvidia's</a> 1.0-8178 driver. Happiness is a 2560x1024 display, at least today.
<p>
But it wasn't complete happiness, because there was still one glitch - duplex printing on my HP PSC-2400 printer. Disappointingly, the Solaris 10 docs didn't have the answer - I'll have to mention that one to Norm and Wendy. But one Google for "duplex printing foomatic" later, I had the <a href="http://www.sun.com/bigadmin/content/submitted/duplex_printing.html">answer</a>.
<p>
And with that, I think we're at the point where my Linux-developing wife will be retreating to just using her laptop. I'll still keep Ubuntu on the system, as it shows a lot of things we need to do for the Solaris desktop. I just don't need to use it regularly anymore. Evidence that we're moving, a step at a time, back towards desktop viability.https://blogs.oracle.com/dminer/entry/approachability_community_comes_aliveApproachability Community Comes Alive!Dave Miner-Oracle 2005-10-08T06:20:23+00:002005-10-08T13:39:58+00:00I'm happy to announce that the new <a href="http://www.opensolaris.org/os/community/approachability/">Approachability community</a> is now live on <a href="http://www.opensolaris.org">OpenSolaris</a>.
<p>
The initial content there is some basic background on improving the Solaris networking experience, a program we've been referring to internally as "Network Automagic". As most Solaris users will attest, networking configuration on Solaris is much too difficult. The primary reason is that many of the basic assumptions go back 15 or more years, and we haven't updated the core architecture to account for the changes that dynamic addresses, mobility, and wireless networks have brought about. My group is all about fixing that, and here's where you can start getting involved. So take a look, post your stories and suggestions, and be prepared for a lot more to come!
<p>
Technorati Tag: <a href="http://www.technorati.com/tag/OpenSolaris" rel="tag">OpenSolaris</a>
<br>
Technorati Tag: <a href="http://www.technorati.com/tag/Solaris" rel="tag">Solaris</a><br>