Six years ago I started working furiously on this little side project about package management for Windows. It started to grow and over time it became clear that it was going to be something important. A community flourished and there was a tremendous uptake for this little tool.

Fast forward to present, starting soon I will be focused solely on Chocolatey as the Founder of Chocolatey Software, Inc*! It's an exciting opportunity to really see where we can take this Windows software management thing!

I also could not have had the opportunity to move forward without the support of a tremendous community, who has contributed to Chocolatey's success in many ways. Your support does not go unnoticed - we will continue to make open source improvements, along with ensuring that organizations can take Chocolatey to the next level with Chocolatey for Business.

It's a bit bittersweet, as I've had the opportunity to work with a lot of fantastic folks at Puppet and do some really awesome things for furthering automation on Windows. In many ways Puppet has been an amazing place to work (I highly recommend it; they have the remote employee situation handled). However, an opportunity to follow my first love, Chocolatey, is a dream I won't pass up.

Not everyone gets the opportunity to follow their dreams, so when you get a chance it can be both a thrilling and scary experience! Here's to the future of Chocolatey and Windows automation!

* - For those keeping track - Chocolatey Software was formed in November 2016 as a spin-off of RealDimensions Software, LLC.

This is a very exciting time for Chocolatey! Over the past 5 years, there have been some amazing points in Chocolatey's history. Now we are less than 10 days from another historical moment for Chocolatey - when licensed editions become available for purchase! This is the moment when we are able to offer features that enable businesses to better manage software through Chocolatey and offer non-free features to our community! This also marks when the community (and organizations) take the next step to ensure the longevity of Chocolatey for the next 10-20 years. I started this process with a dream and a Kickstarter and now it's finally coming to fruition!

Features

Here is a list of the licensed features that will be coming in May. I really think you are going to like what we've been cooking up:

No more 404s - Alternate permanent download location for Professional customers. Read more...

Integration with existing Antivirus - Great for businesses that don't want to reach out to VirusTotal.

(Business Only) Create packages from software files/installers - Do you keep all the applications you install for your business internally somewhere? Chocolatey can automatically create packages for all the software your organization uses in under 5 minutes! - Shown as a preview in a March webinar (fast forward to 36:45)

Install Directory Switch - You no longer need to worry about the underlying directives to send to native installers to install software into special locations. You can simply pass one directory switch to Chocolatey and it will handle this for you.

Support and prioritization of bugs and features for customers.

Sold! But How Do I Buy?

While we are still getting the front-end systems set up and ensuring all of the backend systems are in place and working properly, we are limiting availability to the first 500 professional licenses and 20 businesses (note: we do not expect any issues with our payment processor). Because we are limiting availability, you must register for the Go Live Event at https://chocolatey.eventbrite.com if you are interested. It bears repeating: the links for purchase will only be sent to folks who have registered for the event, so secure your spot now!

A designer started a conversation with us in December 2014 and we've recently come to a decision point on Chocolatey - a new logo (and soon a new website)! A special thanks goes out to Julian Krispel-Samsel!

Almost immediately folks started asking what this means for Chocolatey. It’s a great question. Here’s the lowdown. This is fantastic for Chocolatey! You now have a fantastic way to get Unix apps and utilities with dpkg/apt in addition to great Windows apps and software with choco. More developers are going to be using the terminal to do things. It means more users of both apt and choco. More productivity for Windows users and developers. Think about that for a second. On no other platform will you have this ability. It’s an exciting time to be in Windows!

What you can expect to see is more collaboration between choco and apt if they can communicate. Just like you can work with choco install --source windowsfeatures (back in the latest 0.9.10 betas!), expect to see choco install rsync --source apt. https://github.com/chocolatey/choco/issues/678

Coming up soon you are going to see what’s coming in the next version of Chocolatey and why it is going to amaze you as another big leap in package management for Windows!

Chocolatey turned 5 years old recently! I committed the first lines of Chocolatey code on March 22, 2011. At that time I never imagined that Chocolatey would grow into a flourishing community and a tool that is widely used by individuals and organizations to help automate the wild world of Windows software. It's come a long way since I first showed off early versions of Chocolatey to some friends for feedback. Over the last 2 years things have really taken off!

The number of downloads has really increased year over year!

Note: While not a completely accurate representation of usage and popularity, the number of downloads gives a pretty good context. Going up by 7 million and then by almost 30 million downloads in one year really shows a trend.

Note: The Chocolatey package has about 1,000 downloads per hour. I shut off the statistics for the install script back in October 2015 due to the extreme load on the site, so the number of Chocolatey package downloads is missing some of the statistics.

History

Let’s take a little stroll through some of the interesting parts of Chocolatey’s history. The history of Chocolatey really starts when I joined the Nubular (Nu) team in summer 2010.

January 2016 – Moderation backlog is reduced to near zero and is now manageable thanks to the automation.

February 1, 2016 – First Professional Licenses of Chocolatey are shipped to Kickstarters.

March 21, 2016 – CloudFlare caching tweaks introduced on the community repository to handle the increased pressure that will come from tab completion for package names.

March 23, 2016 – Virus scan results shown on the community repository for packages.

This doesn’t represent everything that has happened. I tried to list out and attribute everything I could find and remember. There have been so many amazing package maintainers over the years; there are too many of you to possibly list. You know who you are. You have made the community what it is today and have been instrumental in shaping enhancements in Chocolatey.

Looking to the Future

The community has been amazing in helping Chocolatey grow and showing that there is a need that it fills. Package maintainers have put in countless and sometimes thankless hours to ensure community growth and consumers have really found the framework useful! Thank you so much! The next version of Chocolatey is coming and it is going to be amazing. Here's to the next 5 years, may we change the world of Windows forever!

The cleaner - provides reminders and closes packages under review when they have gone stale.

The Cleanup Service

We've created a cleanup service, known as the cleaner, that went into production recently.

It looks for packages under review that have gone stale - defined as 20 or more days since last review with no progress.

Sends a notice/reminder that the package is waiting for the maintainer to fix something and that if another 15 days goes by with no progress, the package will automatically be rejected.

15 days later if no progress is made, it automatically rejects packages with a nice message about how to pick things back up later when the maintainer is ready.
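Taken together, the cleaner's rules above amount to a simple decision function. A minimal sketch in Python (the function and field names here are hypothetical; the real service's internals aren't public):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=20)   # stale: 20+ days since last review progress
REJECT_AFTER = timedelta(days=15)  # reject: 15 more days after the reminder

def cleaner_action(last_progress, reminded_on, now):
    """Return what the cleaner should do with a package under review."""
    if reminded_on is not None:
        # Reminder already sent; reject if another 15 days pass with no progress
        if now - reminded_on >= REJECT_AFTER:
            return "reject"  # with a note on how to pick things back up later
        return "wait"
    if now - last_progress >= STALE_AFTER:
        return "remind"      # notify the maintainer the package has gone stale
    return "wait"
```

For example, a package last touched on January 1 with no reminder yet gets a reminder on January 21 or later; 15 days after the reminder with no progress, it is rejected.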

Current Backlog

We've found that with all of this automation in place, the moderation backlog was quickly reduced and will continue to be manageable. A visual comparison:

December 18, 2015 – 1630 packages ready for a moderator

January 16, 2016 – 7 packages ready for a moderator

Note the improvements all around! The most important numbers to key in on are the first 3; they represent a "waiting for reviewer to do something" status. With the validator and verifier in place, moderation is much faster and more accurate, and the validator has increased package quality all around with its review!

The waiting for maintainer status (927 in the picture above) represents the bulk of the total number of packages under moderation currently. These are packages that require an action on the part of the maintainer to actively move the package to approved. This is also where the cleanup service comes in. The cleaner sent 800+ reminders two days ago. If there is no response by early February on those packages, the waiting for maintainer status will drop significantly as those packages will automatically be rejected. Some of those packages have been waiting for maintainer action for over a year and are likely abandoned.

If you are a maintainer and you have not been getting emails from the site, you should log in now and make sure your email address is receiving emails and that the messages are not going to your spam folder. A rejected package version is reversible; the moderators can put it back to submitted at any time when a maintainer is ready to work on moving the package towards approval again.

Statistics

This is where it really starts to get exciting. Some statistics:

Around 30 minutes after a package is submitted the validator runs.

Within 1-2 hours the verifier has finished testing the package and posts results.

Typical human review wait time after a package is deemed good is less than a day now.

We're starting to build statistics on average time to approval for packages that go through moderation, which will be visible on the site. Running some statistics by hand: we've approved 236 packages that have been created since January 1st, and the average final good package (meaning the last time someone submitted fixes to the package) to approval time has been 15 hours. There are some packages that drove that up due to fixing some things in our verifier and rerunning the tests. If I look only at packages since those fixes went in on the 10th, that is 104 packages with an average approval within 7 hours!
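Running that statistic by hand looks something like the following sketch (the timestamps here are made up for illustration; they are not the actual moderation data):

```python
from datetime import datetime

# (last good push, approval) timestamp pairs for a few packages - made-up data
reviews = [
    (datetime(2016, 1, 11, 8, 0),  datetime(2016, 1, 11, 14, 0)),  # 6 hours
    (datetime(2016, 1, 12, 9, 0),  datetime(2016, 1, 12, 17, 0)),  # 8 hours
    (datetime(2016, 1, 13, 20, 0), datetime(2016, 1, 14, 3, 0)),   # 7 hours
]

# Average hours from the final good push to approval
hours = [(approved - pushed).total_seconds() / 3600 for pushed, approved in reviews]
average = sum(hours) / len(hours)
print(f"average approval time: {average:.0f} hours")
```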

tl;dr: Everything on https://chocolatey.org/notice is coming to fruition! We've automatically tested over 6,500 packages, a validator service is coming up now to check quality and the unreviewed backlog has been reduced by 1,000 packages! We sincerely hope that the current maintainers who have been waiting weeks and months to get something reviewed can be understanding that we’ve dug ourselves into a moderation mess and are currently finding our way out of this situation.

We’ve added a few things to Chocolatey.org (the community feed) to help speed up review times for package maintainers. A little over a year ago we introduced moderation for all new package versions (besides trusted packages), and from the user perspective it has been a fantastic addition. Usage has gone up by over 20 million packages installed in one year, versus just 5 million in the 3 years before it! It’s been an overwhelming response from the user community. Let me say that again for effect: Chocolatey’s usage of community packages has increased 400% in one year over the prior three years combined!

But let’s be honest, we’ve nearly failed in another area: keeping the moderation backlog low. We introduced moderation as a security measure for Chocolatey’s community feed because it was necessary, but we introduced it too early. We didn’t have the infrastructure automation in place to handle the sheer load of packages that were suddenly thrown at us. And once we put moderation in place, more folks wanted to use Chocolatey, so it suddenly became much more popular. And because we have automation surrounding updating and pushing packages (namely automatic packages), we had some folks who would submit 50+ packages at a time. With one particular maintainer submitting 200 packages automatically, and a review of each of them taking somewhere between 2-10 minutes, you don’t have to be a detective to understand how this was going to become a problem. And from the backlog you can see it really hasn’t worked out well.

The most important number to understand here is the number in the submitted status (underlined). This is the number of packages where a moderator has not yet looked at a package. A goal is to keep this well under 100. We want the time from a high-quality package getting submitted to approved to be within 1-2 days.

Moderation has, up until recently, been a very manual process. Sometimes, which moderator looked at your package determined whether it was going to be held in review for various reasons. We’ve added moderators and we’ve added more guidance around moderation to help bring a more structured review process. But it’s not enough.

Some of you may not know this, but our moderators are volunteers and we currently lack full-time employees to help fix many of the underlying issues. Even considering that we’ve also needed to work towards Kickstarter delivery and the Chocolatey rewrite (making choco better for the long term), it’s still not the greatest news to know that it has taken a long time to fix moderation, but hopefully it brings some understanding. Our goal is to eventually bring on full-time employees but we are not there yet. The Kickstarter was a start, but it was just that. A kick start. A few members of the core team who are also moderators have focused on ensuring the Kickstarter turns into a model that can ensure the longevity of Chocolatey. It may have felt that we have been ignoring the needs of the community, but that has not been our intention at all. It’s just been really busy and we needed to address multiple areas surrounding Chocolatey with a small number of volunteers.

So What Have We Fixed?

All moderation review communication is done on the package page. Now all review is done on the website, which means there is no more email back and forth (the older process) that looked like one-sided communication on the site. This is a significant improvement.

Package review logging. Now you can see right from the discussion when and by whom a package was submitted, when statuses change and where the conversation is.

More moderators. A question that comes up quite a bit surrounds the number of moderators that we have and adding more. We have added more moderators; we are up to 12 moderators for the site. Moderators are chosen based on building trust, usually through being extremely familiar with Chocolatey packaging and what is expected of approved packages. Learning what is expected usually comes through having your own packages approved and maintaining a few packages. We’ve written most of this up at https://github.com/chocolatey/choco/wiki/Moderation.

Maintainers can self-reject packages that no longer apply. Say your package has a download URL for the software that is always the same. You may have some older package versions that could take advantage of being purged out of the queue since they are no longer applicable.

The package validation service (the validator). The validator checks the quality of a package based on requirements, guidelines and suggestions for creating packages for Chocolatey’s community feed. Many of the validation items will automatically roll back into choco and will be displayed when packaging a package. We like to think of the validator as unit testing. It is validating that everything is as it should be and meets the minimum requirements for a package on the community feed.

The package verifier service (the verifier). The verifier checks the correctness (that the package actually works), that it installs and uninstalls correctly, has the right dependencies to ensure it is installed properly and can be installed silently. The verifier runs against both submitted packages and existing packages (checking every two weeks that a package can still install and sending notice when it fails). We like to think of the verifier as integration testing. It’s testing all the parts and ensuring everything is good. On the site, you can see the current status of a package based on a little colored ball next to the title. If the ball is green or red, the ball is a link to the results (only on the package page, not in the list screen).

Green means good. The ball is a link to the results.

Orange means still pending verification (has not yet run).

Red means it failed verification for some reason. The ball is a link to the results.

Grey means unknown or excluded from verification (if excluded, a reason will be listed on the package page).

Coming Soon - Moderators will automatically be assigned to backlog items. Once a package passes both validation and verification, a moderator is automatically assigned to review the package. This will be added once the backlog is in a manageable state.

What About Maintainer Drift?

Many maintainers come in to help out at different times in their lives and they do it nearly always as volunteers. Sometimes it is the tools they are using at the current time and sometimes it has to do with where they work. Over time folks’ preferences/workplaces change and so maintainers drift away from keeping packages up to date because they have no internal incentive to continue to maintain those packages. It’s a natural human response. I've been thinking about ways to reduce maintainer drift for the last three years and I keep coming back to the idea that consumers of those packages could come along and provide a one time or weekly tip to the maintainer(s) as a thank you for keeping package(s) updated. We are talking to Gratipay now - https://github.com/gratipay/inside.gratipay.com/issues/441 This, in addition to a reputation system, I feel will go a long way to help reduce maintainer drift.

Final Thoughts

Package moderation review time is down to mere seconds as opposed to minutes like before. This will allow a moderator to review and approve package versions much more quickly and will reduce our backlog and keep it lower.

It’s already working! The number in the unreviewed backlog is down by 1,000 from the month prior. This is because a moderator doesn’t have to wait until a proper time when they can have a machine up, ready for testing and in the right state. Now packages can be reviewed faster. This is only with the verifier in place, sheerly testing package installs; the validator is expected to cut that down to near seconds of review time. The total number of packages in the moderation backlog has also been reduced, but honestly I usually only pay attention to the unreviewed backlog number as it is the most important metric for me.

We sincerely hope that the current maintainers who have been waiting weeks and months to get something reviewed can be understanding that we’ve dug ourselves into a moderation mess and are currently finding our way out of this situation. We may have some required findings and will ask for those things to be fixed, but for anything that doesn't have required findings, we will approve them as we get to them.

“Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” – Howard H. Aiken

Look around today. There is so much that you can do on Windows with respect to automation that just wasn’t possible a few short years ago. It’s hard to see what has changed because our memories are sometimes so short about how it used to be, so let’s go back about four years, to 2010. PowerShell was still young, there was no Chocolatey, and things like Puppet and Chef didn’t work on Windows yet.

Folks were leaving Windows left and right once they got a taste of how easy automation was on other OS platforms. Well-versed folks. Loud folks. Folks that were at the top of their game heading out of Windows. Many others have considered it. You’ve all heard the “leaving .NET” stories from some of the best developers on the .NET platform. But what you may not have realized is that these folks were not just leaving .NET, they were leaving Windows entirely. Some of this was due in part to limitations they were finding that just were not there in other OSes. What you are also missing are all the folks that were silently leaving. For every one person speaking out about it, there were many more silent losses. The system admins, fed up with GUIs and lack of automation, leaving for greener pastures. The developers who didn’t blog, leaving the platform.

But a change has occurred more recently that has slowed that process. I believe it is better tools and automation of the Windows platform. Some people have shown such a passion that they’ve saved Windows as a platform for future generations.

So What Saved Windows?

PowerShell – Arguably this could be seen as the catalyst that started it all. It came out in 2006 and while v1 was somewhat limited, v2 (Oct 2009) added huge improvements, including performance. PowerShell is prevalent now, but it had humble beginnings. When Jeffrey Snover saw a need for better automation in Windows, no one understood what he was trying to do. Most folks at Microsoft kept asking, why do you need that? But Jeffrey had such a passion for what was needed that he took a demotion to make it happen. And we are thankful for that because it shaped the face of Windows automation for all. Jeffrey’s passion brought us PowerShell, and it is continuing to bring us more things that have come out of his original Monad Manifesto from 2002.

Chocolatey – In 2011 Chocolatey furthered the automation story of Windows with package management, something that other platforms have enjoyed for years. Rob Reynolds’ goals for Chocolatey in the beginning were simply to solve a need, but it has since grown into so much more and is now making improvements to become a true package manager. It wasn’t the first approach to package management on Windows and it is certainly not the last. But it did many things right: it didn’t try to achieve lofty goals. It started working at the point of the native installers and official distribution points, with a simple approach to packaging and very good helpers to achieve many abilities. When Rob first started working on it, most of his longtime technical friends questioned the relevance of it. Rob did not stop, because he had a vision, a passion for making things happen. As his vision has been realized by many, he is about to change the face of package management on Windows forever.

Puppet (and other CM tools) – In 2011, Puppet started working on Windows thanks to Josh Cooper. He single-handedly brought Puppet’s excellent desired state configuration management to Windows (Chef also brought Windows support in 2011). Josh saw a need and convinced folks to try an experiment. That experiment has grown and has brought the last bits of what was needed to save Windows as a platform. His passion for bringing Puppet to Windows has grown into so much more than what it originally started out to do. And now it is arguably the best CM tool for any platform; as the Puppet Labs CEO stated at PuppetConf 2014, Puppet is becoming the lingua franca of infrastructure configuration.

The Effects of Passion

All of this passion for automation has really changed Microsoft. They have adopted automation as a strategy. They are moving to a model of openness, recently announcing that the entire .NET platform is going to be open source. They are getting behind Chocolatey with OneGet and getting it built into Windows. They announced PowerShell DSC last year and have made huge improvements in it since then. From where we are sitting, it appears Microsoft now gets it. The effects of passion have really turned the company around and saved Windows. Windows is becoming the platform we all hoped it would be; it’s really bringing many folks to see it as a true platform for automation, and that makes Windows a formidable platform for the foreseeable future.

Well, just after three years of having https://chocolatey.org, we’ve finally implemented package moderation. It’s actually quite a huge step forward. This means that when packages are submitted, they will be reviewed and signed off by a moderator before they are allowed to show up and be used by the general public.

What This Means for You, Package Consumers

Higher quality packages - we are working to ensure by the time a package is live, moderators have given feedback to maintainers and fixes have been added.

More appropriate packages - packages that are not really relevant to Chocolatey's community feed will not be approved.

More trust - packages are now reviewed for safety and completeness by a small set of trusted moderators before they are live.

Reviewing existing packages - All pre-existing packages will be reviewed and duplicates will be phased out.

Not Reviewed Warning - Packages that are pre-existing that have not been reviewed will have a warning on chocolatey.org. Since this is considered temporary while we are working through moderation of older packages, we didn't see a need to add a switch to existing choco.

Existing packages that have not been moderated yet will have a warning posted on the package page.

Packages that have been moderated will have a nice message on the package page.

If the package is rejected, the maintainer will see a message, but no one else will see or be able to install the package.

You should also keep the following in mind:

We are not going to moderate prerelease versions of a package as they are not on the stable feed.

We are likely only moderating the current version of a package. If you feel older versions should be reviewed, please let us know through contact site admins on the package page.

The Chocolatey client is not going to give you any indication of approval. We expect this to be temporary while we review all existing packages, so we didn’t see much benefit for the amount of work involved to bring it to the choco client in its current implementation.

What This Means for Package Maintainers

Re-push same version - While a package is under review, you can continually push up that same version with fixes.

Email - Expect email communication for moderation. If your email is out of date or you never receive email from Chocolatey, ensure it is not going to the spam folder. We will give up to two weeks before we reject a package for non-responsive maintainers. It's likely we will then review every version of that package as well.

Learning about new features - during moderation you may learn about new things you haven't known before.

Pre-existing - We are going to be very generous with pre-existing packages. We will start communicating things that will need to be corrected the first time we accept a package; the second update will need to have those items corrected.

Push gives no indication of moderation - Choco vCurrent gives no indication that a package went under review. We are going to put out a point release with that message and a couple of small fixes.

I’m really excited to tell you about The Chocolatey Experience! We are taking Chocolatey to the next level and ensuring the longevity of the platform. But we can’t get there without your help! Please help me support Chocolatey and all of the improvements we need to make!

Chocolatey has some big changes coming in the next few months, so we’ve started a newsletter to keep everyone informed of what’s coming. The folks who are signed up for the newsletter will hear about the latest and greatest changes coming for Chocolatey first, plus they will know when the Kickstarter (Yes! Big changes are coming!) kicks off before anyone else. Sign up for the newsletter now to learn about all the exciting things coming down the pipe for Chocolatey!

Run the 'puppet resource user' command again. Note the user we created is there!

Let’s clean up after ourselves and remove that user we just created:

puppet apply -e "user {'bobbytables_123': ensure => absent, }"

Relevant output should look like:

Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed

Run the 'puppet resource user' command one last time. Note we just removed a user!

Conclusion

You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet.

Next we are going to start to get into more extensive things with Puppet. Next time we’ll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.

Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment on Windows infrastructure with 1/3 of the platform client development staff being Windows folks. It appears that Microsoft believed an end state configuration tool like Puppet was the way forward, so much so that they cloned Puppet’s DSL (domain-specific language) in many ways and are calling it PowerShell DSC.

Puppet Labs is pushing the envelope on Windows. Here are several things to note:

It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources!

Puppet does take some time to learn, but as with anything you need to learn, you need to weigh the benefits versus the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then but was the only game in town. Puppet’s ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs.

As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of in an imperative sense.

You may also find that right now Puppet doesn’t run manifests (scripts) in the order resources are specified. This is the number one learning point for most folks. A long-time consternation for some folks, manifest ordering was not possible in the past. In fact, it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources

You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn’t that be what Windows users want to choose? Other CMs are integrating with DSC; will Puppet follow suit and integrate with DSC? The biggest concern that I have with DSC is its lack of visibility in fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I’m hoping that newer releases actually work; it does have some promising capabilities, it just doesn’t quite come up to the standard of something that should be used in production. In contrast, Puppet is an almost ten-year-old language with an active community! It’s very stable, and when trusting your business to configuration management, you want something that has been around awhile and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows.

Access Control Lists and permissions can get inherently complex. We didn’t want to prevent a sufficiently advanced administrator/developer from reaching advanced scenarios with Puppet’s ACL module. With the ACL module (soon to be) out in the wild, it may be helpful to explain one of its significantly advanced features: mask specific rights. I am going to use the term “acl” to mean the module during the rest of this post (and not the access control list or discretionary access control list).

Say you need very granular rights, not just RX (read, execute), but also to read and write attributes. You get read attributes (FILE_READ_ATTRIBUTES) with read (FILE_GENERIC_READ); see http://msdn.microsoft.com/en-us/library/windows/desktop/aa364399(v=vs.85).aspx. ACL provides you with the ability to specify ‘full’, ‘modify’, ‘write’, ‘read’, ‘execute’ or ‘mask_specific’. Mask specific is for when you can’t get the specific rights you need for an identity (trustee, group, etc.) from the named values and need to get more specific.

Note specifically that “rights=>[‘mask_specific’]” also comes with a mask integer specified as a string e.g. “mask => ‘1180032’”. Now where did that number come from? In this specific case you see it is RA,S,WA,Rc (Read Attributes, Synchronize, Write Attributes, Read Control). Let’s take a look at http://msdn.microsoft.com/en-us/library/aa394063(v=vs.85).aspx to see the Access Mask values (integer and hex).

SYNCHRONIZE
1048576 (0x100000)

If we look here, 1048576 is the one we want. Let’s whip out our calculators. You knew that math in high school and college was going to be put to good use, right? Okay, calculators out, let’s add those numbers up.

S = 1048576
Rc = 131072
RA = 128
WA = 256
-------------
1180032

That matches the number we have above, so we are good. Now you know how to make mask_specific happen with the acl module should you ever need to.
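If a calculator isn’t handy, the same check can be scripted. A minimal sketch in Python (the constant values come from the MSDN Access Mask page linked above; the variable names are mine):

```python
# Access mask bits from the MSDN Access Mask documentation linked above.
SYNCHRONIZE           = 0x100000  # S  = 1048576
READ_CONTROL          = 0x20000   # Rc = 131072
FILE_READ_ATTRIBUTES  = 0x80      # RA = 128
FILE_WRITE_ATTRIBUTES = 0x100     # WA = 256

# These flags don't overlap, so bitwise OR equals plain addition here.
mask = SYNCHRONIZE | READ_CONTROL | FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES
print(mask)  # 1180032 -- the value you hand to 'mask =>' as a string
```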

Understanding Advanced Permissions

Oh, wait. I should explain a slightly more advanced scenario: RX plus WA, like we started to talk about above. How do you get to that number? Where does FILE_GENERIC_READ come from? Back at http://msdn.microsoft.com/en-us/library/windows/desktop/aa364399(v=vs.85).aspx, we can see that it includes FILE_READ_ATTRIBUTES, FILE_READ_DATA, FILE_READ_EA, STANDARD_RIGHTS_READ, and SYNCHRONIZE. FILE_GENERIC_EXECUTE contains FILE_EXECUTE, FILE_READ_ATTRIBUTES, STANDARD_RIGHTS_EXECUTE, and SYNCHRONIZE. Notice the overlap there? Each one of those flags only gets added ONCE. This is important. If you are following along and looking, you have noticed STANDARD_RIGHTS_READ and STANDARD_RIGHTS_EXECUTE are not listed on the page with the rights. Where did those two come from? Take a look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa374892(v=vs.85).aspx down in the C++ section. See if you notice anything. Wait, what?

STANDARD_RIGHTS_READ, STANDARD_RIGHTS_EXECUTE, and STANDARD_RIGHTS_WRITE are all synonyms for READ_CONTROL. What? Why not just call it read control? I don’t know, I’m not the guy that wrote the Access Masks. Anyway, now we know what we have so let’s get our calculators ready again.

Adding up the unique flags in FILE_GENERIC_READ and FILE_GENERIC_EXECUTE doesn’t quite work out to what we were thinking of, ‘1180073’. Did we forget something? Yes, we got so wrapped up in getting RX sorted out that we forgot about WA, which adds another 256 to the number.
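The whole combination can be verified the same way as before. A Python sketch (the FILE_GENERIC_READ and FILE_GENERIC_EXECUTE values are the documented Win32 composites; bitwise OR counts the overlapping bits only once, which is exactly the “only added ONCE” rule):

```python
# Documented Win32 composite rights; each already bundles the finer flags.
FILE_GENERIC_READ     = 0x120089
FILE_GENERIC_EXECUTE  = 0x1200A0
FILE_WRITE_ATTRIBUTES = 0x100     # the WA we almost forgot

# OR folds the shared SYNCHRONIZE / READ_CONTROL / FILE_READ_ATTRIBUTES
# bits in only once, rather than double-counting them.
mask = FILE_GENERIC_READ | FILE_GENERIC_EXECUTE | FILE_WRITE_ATTRIBUTES
print(mask)  # 1180073
```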

Parting Thoughts

While the ACL module has a simple interface you can definitely see that it packs some power with it. Having this kind of power is really helpful when you need to get fine-grained with your permissions.

This is not something one would normally do, but this is here for future reference for me.

First of all, ensure the puppet, facter and hiera source code is checked out from git, all under the same top-level directory.

Then take the environment.bat file that ships with the puppet installer (in the bin directory), copy it somewhere in your PATH, and edit the first two lines, changing PL_BASEDIR to the top-level directory holding those checkouts.

SET PL_BASEDIR=C:\code\puppetlabs
REM Avoid the nasty \..\ littering the paths.
:: Commented out: SET PL_BASEDIR=%PL_BASEDIR:\bin\..=%

Then copy the puppet.bat file over to the same directory as your modified environment.bat file and you are money.

I recently attended PuppetConf 2013 (the 3rd annual event) and all I can say coming away from that is wow. It was an amazing event with quite a few amazing speakers and sessions out there. There were over 100 speakers and more than 1200 attendees. And we had live streaming for quite a few sessions and keynotes that had a huge attendance (I don’t remember the number off the top of my head). With seven tracks going at a time, not including demos or hands on labs, it was quite an event.

The venue was awesome (San Francisco at the Fairmont Hotel) and I wished that I had a little more time outside of the conference to go exploring. Being there as an attendee, speaker, employee, and volunteer, I saw all sides of the conference. Everything was well prepared and I saw no hiccups from any side. Walking around at some of the events I could hear a buzz in the air about Windows and I happened to overhear a few folks mention the word chocolatey, which was definitely cool considering the majority of folks that are at PuppetConf are mainly Linux with some mixing of environments. I’m hoping to see that start to tip next year.

There were 4 talks on Windows and I was able to make it to almost all of them (5 talks if you consider my hands-on lab a talk). Only two of those were given by Puppet Labs folks, and it was nice to see any Windows talks at all considering there were none last year (I need to verify this).

The hands on lab did not go so well. Apologies to the attendees of the lab, but there was an issue with the virtual machine that I had provided. It was corrupted somewhere between copying it from my box to all of the USB sticks that we gave to lab attendees. Since it was only a 40 minute lab, we had to switch to a quick demo.

Fear holds us back from many things. A little fear is healthy, but don’t let it overwhelm you into missing opportunities.

In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can't do it?

I think in many cases it comes to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don’t know. Some hold back because they are fearful of learning new things. Some hold back simply because taking on a new challenge means giving something else up. The phrase sometimes used is “It’s the devil you know versus the one you don’t.” That fear sometimes causes us to miss great opportunities.

In many people’s case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back here out of a fear of failing. We’ve all heard the phrase “What would you do if you knew you couldn’t fail?”, which is intended to get people to think about the opportunities they might create. A better phrasing I heard recently on the Ruby Rogues podcast was “What would be worth doing even if you knew you were going to fail?” I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn’t care, it would open you up to the possibility of paying more attention to the journey and not the outcome.

In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as “The Flinch”. I’m learning Ruby and Puppet right now. I have limited experience with both, limited to the degree it scares me some that I don’t know much about either. Okay, it scares me quite a bit!

Some people’s defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my “mecca” is a little different in that regard.

A while back I sat down and defined where I wanted my career to go, and it had more to do with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It’s an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there.

Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these “moments.” On July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I’m both scared and excited about the opportunity!

Chocolatey has reached a milestone at 1K unique stable packages! When I started chocolatey a little over two years ago I didn't know there would be such a tremendous community uptake. I am blessed that you have found value in chocolatey and have contributed code, packages, bugs and ideas to making chocolatey better.

To celebrate this we should look at who contributed the package that put us over the top. It was Justin Dearing with SqlKerberosConfigMgr (http://chocolatey.org/packages/SqlKerberosConfigMgr). And I'm giving Justin a $50 gift card for Amazon as a small token of my appreciation. It's not much but we appreciate the contributions! This was unannounced because we want to focus on quality, not quantity.

Now, while this is a significant milestone, we are not very far in the bigger scheme of offerings for Windows. There is no hurry to get there, we prefer quality packages over quantity of packages. We will eventually grow much bigger and as we add additional sources, it increases the amount of packages we can offer.

Thanks so much to all of you for all of your work, we wouldn't be where we are today without the community!

I updated three packages this morning. I didn’t even notice until the tweets came in from @chocolateynuget.

How is this possible? It’s simple. I love automation. I built chocolatey to take advantage of automation. So it would make sense that we could automate checking for package updates and publishing those updated packages. These are known as automatic packages. Automatic packages are what set Chocolatey apart from other package managers, and I daresay they could make chocolatey one of the most up-to-date package managers on Windows.

Automatic Packages You Say?

You’ve followed the instructions for creating a Github (or really any source control) repository with your packages. All you need to do now is to introduce two new utilities to your personal library, Ketarin and Chocolatey Package Updater (chocopkgup for short).

Ketarin

Ketarin is a small application which automatically updates setup packages. As opposed to other tools, Ketarin is not meant to keep your system up-to-date, but rather maintain a compilation of all important setup packages which can be burned to disc or put on a USB stick.

There are some good articles out there that talk about how to create jobs with Ketarin, so I am not going to go into that.

Ketarin does a fantastic job of checking sites for updates and has hooks to run custom commands before and after it downloads the latest version of an app/tool.

Chocolatey Package Updater

Chocolatey Package Updater aka chocopkgup takes the information given out from Ketarin about a tool/app update and translates it into a chocolatey package that it builds and pushes to chocolatey.org. It does this so you don't even have to think about updating a package or keeping it up to date. It just happens. Automatically, in the background, and even faster than you could make it happen. It's almost as if you were the application/tool author.

How To

Prerequisites And Setup:

Optional (strongly recommended) - Ensure you are using a source control repository and file system for keeping packages. A good example is here.

Optional (strongly recommended) - Make sure you have installed the chocolatey package templates. If you’ve installed the chocolatey templates (ReadMe has instructions), then all you need to do is take a look at the chocolateyauto and chocolateyauto3. You will note this looks almost exactly like the regular chocolatey template, except this has some specially named token values.

#Items that could be replaced based on what you call chocopkgup.exe with
#{{PackageName}} - Package Name (should be same as nuspec file and folder) |/p
#{{PackageVersion}} - The updated version | /v
#{{DownloadUrl}} - The url for the native file | /u
#{{PackageFilePath}} - Downloaded file if including it in package | /pp
#{{PackageGuid}} - This will be used later | /pg
#{{DownloadUrlx64}} - The 64bit url for the native file | /u64

These are the tokens that chocopkgup will replace when it generates an instance of a package.
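Conceptually, what chocopkgup does with these is a straight token substitution over the template files. A rough sketch in Python (the template text and values here are illustrative, not chocopkgup’s actual code):

```python
# Illustrative token substitution, not chocopkgup's actual implementation.
template = """<id>{{PackageName}}</id>
<version>{{PackageVersion}}</version>"""

# Values that would arrive from Ketarin via the command-line switches.
tokens = {
    "{{PackageName}}": "sometool",    # /p
    "{{PackageVersion}}": "1.2.3",    # /v
}

nuspec = template
for token, value in tokens.items():
    nuspec = nuspec.replace(token, value)

print(nuspec)
```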

Note the commented out /disablepush. This is so you can create a few packages and test that everything is working well before actually pushing those packages up to chocolatey. You may want to add that switch to the main command above it.

This gets Ketarin all set up with a global command for all packages we create. If you want to use Ketarin outside of chocolatey, all you need to do is remove the global setting for Before updating an application and instead apply it to every job that pertains to chocolatey update.

Create an Automatic Package:

Preferably you are taking an existing package that you have tested and converting it to an automatic package.

Open Ketarin. Choose File –> Import…

Choose the template you just saved earlier (KetarinChocolateyTemplate.xml).

Answer the questions. This will create a new job for Ketarin to check.

One important thing to keep in mind is that the Application name in Ketarin needs to match the name of the package folder exactly.

Right click on that new job and select Edit. Take a look at the following:

Set the URL appropriately. I would shy away from FileHippo for now; the URL has been known to change, and if you upload that as the download url in a chocolatey package, it won’t work very well.

Click on Variables on the right of URL.

On the left side you should see a variable for version and one for url64. Click on version.

In the contents itself, highlight enough good information before a version to be able to select it uniquely during updates (but not so much it doesn’t work every time as the page changes). Click on Use selection as start.

Now observe that it didn’t jump back too far.

Do the same with the ending part, keeping in mind that this side doesn’t need to be too much because it is found AFTER the start. Once selected click on Use selection as end.

It should look somewhat similar to what is presented in the picture above.

If you have a 64bit Url you want to get, do the same for the url64 variable.

When all of this is good, click OK.

Click OK again.

Testing Ketarin/ChocoPkgUp:

We need to get a good idea of whether this will work or not.

We’ve set /disablepush in Ketarin global so that it only goes as far as creating packages.

Navigate to C:\ProgramData\chocolateypackageupdater.

Open Ketarin, find your job, and right click Update. If everything is set up correctly, in moments you will have a chocolatey package in the chocopkgup folder.

Inspect the resulting chocolatey package(s) for any issues.

You should also test that the scheduled task works appropriately.

Troubleshooting/Notes

Ketarin comes with a logging facility so you can see what it is doing. It’s under View –> Show Log.

In the top-level folder for chocopkgup (in program data), we log what we receive from Ketarin, as well as the process of putting together a package.

Make sure the name of the application in Ketarin exactly matches that of the folder in the automatic packages folder.

Every once in a while you want to look in Ketarin to see what jobs might be failing, then figure out why.

Every once in a while you will want to inspect the chocopkgup folder to see if there are any packages that did not make it up for some reason or another, and then upload them.

Conclusion

Automatic chocolatey packages are a great way to grow the number of packages you maintain without any significant jump in maintenance cost. I’ve been working with and using automatic packages for over six months. Is it perfect? No, it has issues from time to time (getting a good version read or actually publishing the packages in some rare cases). But it works pretty well. Over the coming months more features will be added to chocopkgup, such as being able to run its own PowerShell script (for downloading components to include in the package, etc.) that would not end up in the final chocolatey package.

With full automation, instead of having packages that are out of date or no longer valid, you run the small chance that something changed in the install script or that something no longer works. The chances of that are much, much lower than packages simply going out of date.

It takes just a few minutes longer when creating packages to convert them to automatic packages, but it is well worth it when you see that you are keeping applications and tools up to date on chocolatey without any additional effort on your part. Automatic packages are awesome!

Recently I mentioned this.Log. Given the number of folks that were interested in this.Log, I decided to pull this source out and make a NuGet package (well, several packages).

Source

The source is now located at https://github.com/ferventcoder/this.log. Please feel free to send pull requests (with tests of course). When you clone it, if you open visual studio prior to running build.bat, you will notice build errors. Don’t send me a pull request fixing this, I want it to work the way it does now. Use build.bat appropriately.

To try to cut down on the version number being listed everywhere, I created a SharedAssembly.cs (and a SharedAssembly.vb for the VB.NET samples). That helped, but it didn’t solve the problem where it was in the nuspecs as dependencies. So I took it a step further and created a file named VERSION. When you run the build, it updates all the files that contain version information. Having one place to handle the version is nice.
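The version stamping the build does can be pictured as a small search-and-replace step. A hedged sketch (the file contents, regex, and values here are assumptions for illustration, not the actual build script):

```python
# Sketch of stamping a single VERSION value into version-bearing files.
import re

version = "0.0.2.0"  # in the real build this would be read from the VERSION file

# A stand-in for one line of SharedAssembly.cs.
shared_assembly = '[assembly: AssemblyVersion("0.0.1.0")]'

# Replace whatever version is currently in the attribute with the new one.
stamped = re.sub(r'AssemblyVersion\("[^"]+"\)',
                 'AssemblyVersion("{0}")'.format(version),
                 shared_assembly)
print(stamped)  # [assembly: AssemblyVersion("0.0.2.0")]
```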

NuGet

When moving this.Log to a NuGet package (or in this case 9 NuGet packages), I was able to play with some features of NuGet I had not used previously: symbol servers and packing a csproj. With packing a csproj, I was able to quickly (well, mostly) set up the build to package up every project into NuGet packages.

NOTE: If you’ve installed any of these prior to this post, you will want to uninstall and reinstall them (there was a particular issue with the Rhino Mocks version). I’ve fixed and updated quite a bit on them from version 0.0.1.0 to 0.0.2.0.

Performance

Performance testing with log4net showed this only has an overhead of 42 ticks tested over 100,000 iterations. That’s a pretty good start given that it has a reflection hit on every call.
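The measurement itself was done in C# against log4net, but the shape of such a micro-benchmark is easy to sketch in Python: time a direct call versus one that pays a reflection-style lookup on every call, over 100,000 iterations (the names here are mine, not the original benchmark):

```python
# Not the original C#/log4net benchmark -- just the shape of one.
import timeit

class Logger:
    def info(self, msg):
        pass

LOG = Logger()

def direct():
    LOG.info("message")

def via_lookup():
    # getattr stands in for the per-call reflection hit
    getattr(LOG, "info")("message")

n = 100_000
base = timeit.timeit(direct, number=n)
with_lookup = timeit.timeit(via_lookup, number=n)
# Timing noise can make the difference negative on a quiet call like this.
print("approx per-call overhead: {0:.9f}s".format(max(with_lookup - base, 0.0) / n))
```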

One of my favorite creations over the past year has been this.Log(). It works everywhere including static methods and in razor views. Everything about how to create it and set it up is in this gist.

How it looks

public class SomeClass {
    public void SomeMethod() {
        this.Log().Info(() => "Here is a log message with params which can be in Razor Views as well: '{0}'".FormatWith(typeof(SomeClass).Name));
        this.Log().Debug("I don't have to be delayed execution or have parameters either");
    }

    public static void StaticMethod() {
        "SomeClass".Log().Error("This is crazy, right?!");
    }
}

Why It’s Awesome

It does no logging if you don’t have a logging engine set up.

It works everywhere in your code base (where you can write C#). This means in your razor views as well!

It uses deferred execution, which means you don’t have to mock it to use it with testing (your tests won’t fail on logging lines).

You can mock it easily and use that as a means of testing.

You have no references to your actual logging engine anywhere in your codebase, so swapping it out (or upgrading) becomes a localized event to one class where you provide the adapter.

Some Internals

This uses the awesome static logging gateway that JP Boodhoo showed me a long time ago at a developer bootcamp, except it takes the concept further. One thing that always bothered me about the static logging gateway is that it would construct an object EVERY time you called the logger if you were using anything but log4net or NLog. Internally it likely continued to reuse the same object, but at the codebase level it appeared as though that was not so.

You can see I’m using a concurrent dictionary which really speeds up the operation of going and getting a logger. I get the initial performance hit the first time I add the object, but from there it’s really fast. I do take a hit with a reflection call every time, but this is acceptable for me since I’ve been doing that with most logging engines for awhile.
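The caching idea translates to any language: keep one logger per type name in a thread-safe map so only the first lookup pays the construction cost. A Python sketch of the concept (the real code is C# with ConcurrentDictionary; the names here are mine):

```python
import threading

class NullLogger:
    """Stands in for the adapter the gateway would construct."""
    def info(self, msg):
        pass

_loggers = {}
_lock = threading.Lock()

def logger_for(type_name):
    try:
        return _loggers[type_name]  # fast path: no construction after first call
    except KeyError:
        with _lock:
            # setdefault keeps the first instance if two threads race here
            return _loggers.setdefault(type_name, NullLogger())

a = logger_for("SomeClass")
b = logger_for("SomeClass")
print(a is b)  # True: the same cached instance both times
```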

Conclusion

Extensions are awesome if used sparingly. Is this.Log perfect? Probably not, but it does have a lot of benefits in use. Feel free to take my work and make it better. Find a way to get me away from the reflection call every time. I’ve been using it for almost a year now and have improved it a little here and there.

If there is enough interest, I can create a NuGet package with this as well.

The following is a script that I used to help me clean up a database and reduce the size of it from 95MB down to 3MB so we could use it for a development backup. I will note that we also removed some of the data out. I shared this with a friend recently and he used this to go from 70GB to 7GB!

UPDATE: Special Note

Please don’t run this against something that is live or performance critical. You want to do this where you are the only person connected to the database, like a restored backup of the critical database. Doing it against something live will most definitely cause issues. I can in no way be responsible for the use of this script. You should understand what you are doing before you execute these scripts.

So what does it do?

It gives you a report of what tables are taking up the most space.

It allows you to specify those tables for cleaning.

Gives you that same report of space used up by tables after the clean.

It rebuilds and reorganizes all indexes with reports before and after.

It runs shrink file on the physical files (potentially unnecessary due to the next thing it does, but hey, couldn’t hurt right?!).

Refresh database is a workflow that allows you to develop with a migrations framework but deploy with SQL files. It’s more than that: it allows you to rapidly make changes to your environment and sync up with other teammates. When I talk about environment, I mean your local development environment: your code base and the local database back end you are hitting.

Refresh database comes in two flavors, one for NHibernate and one for Entity Framework. I’m going to show you an example of the one for Entity Framework, which you can find in the repository for rh-ef on github. One note before we get started: This could work with any migrations framework that will output SQL files.

What is this? Why should I use this?

How long do you spend updating source code and then getting your database up to snuff afterward so you can keep moving forward quickly? Do you work with teammates? Do you have multiple workstations that you might work from and want to quickly sync up your work?

It’s a pain most of us don’t see and an idea that was originally incubated by Dru Sellers. He wanted a fast way of keeping his local stuff up to date right from Visual Studio. Out of that was born Refresh Database. We are talking a simple right click and debug to a synced up database.

Others have talked in the past about how you want to use the same migration algorithm and test it all the way up to production. Refresh DB allows you to test that migration from a local development environment many times a day. So by the time you hand over the SQL files for production (or use RoundhousE), there is no guesswork about whether it is going to work or not. You have the security of knowing that you are good to go.

It’s definitely something that can really speed up your team so you never hear “I got latest and now I’m trying to sync up all the changes to the database.” This should be easy. This should be automatic.

You should never again hear “I made some domain changes but now I’m working to get them into the database.” This should be easy. This should be automatic.

Whether you decide to look further into this or not, it doesn’t matter to me. It just means my teams will get to market and keep updated faster than you (given the same technologies).

How does this work?

This is the simple part. Convincing you to look at it in the first place is the hard part. I have put together a short video to show you exactly how it works. You will see that it is super simple.

Conclusion

Refresh Database has been around for over two years. It’s definitely something that has paid for itself time and again. It’s something you might consider looking at if you have never heard of it.

If you don’t do something with migrations and source control for your database yet, please start now. This will save you countless hours in the future. I’ve walked into more than one company that was hurting in the area of database development because they didn’t treat the database scripts as source code in the same way they did the rest of the code. It’s a must these days. I also see teams doing shared development against a single database. This is a huge no-no (except in certain circumstances) due to the amount of lost time it causes. That, however, is a discussion for another day.

Make VMWare Share Part of the Local Intranet

This is one I’ve found to get stuff to build that I didn’t find anywhere else. Even after running caspol I still couldn’t run executables on the share. That is, until I made the share part of the Local Intranet.

Open Internet Explorer, then open Internet Options.

Find the Security Tab

Open Local Intranet by selecting Local Intranet and pushing the Sites button, then add the share to the list.

This will allow executables to start working, all but the ones built and run from Visual Studio.

.NET Built Executables/Services No Longer Work

It may be awhile before you run into this one. You may have a console application you are building. You will notice that once you move over to the share, you start getting errors related to that. What you need to do is add a small configuration value to the config files.

Add the following to your config files:

<runtime>
  <loadFromRemoteSources enabled="true" />
</runtime>

This allows the assembly to be loaded into memory; otherwise it will not run from a network share.

Caveats to Network Share

Caveats to think about when developing against a share:

Visual Studio has trouble noticing updates to files if you update them outside of Visual Studio.

If you run the local built in web server for web development, don’t expect it to catch the files updating automatically.

If you do any kind of restoring a database from a backup, you may want to consider copying that database to a local drive first.