Approximately one year ago, we highlighted the overabundance of information security issues that were prominently featured in the news. Organizations have apparently missed the memos and in-depth articles that provided valuable insight into how to avoid mistakes that impact the potential customer base. We’re only halfway through this month, yet have been graced with the following gems:

Over the past few months, we’ve been putting a homegrown hyper-converged solution through its paces. The premise behind this effort took shape around the time the Spectre and Meltdown vulnerabilities were announced. The goals for this solution were fairly simple:

Reduce overall power consumption where possible: Traditional disks are a known quantity in terms of power draw, and there is little that can be done to reduce it beyond selecting units with the appropriate characteristics (lower spindle speeds, sleep mechanisms).

Simplify networking and reduce total port count: Running a standalone NAS and a dedicated hypervisor host can quickly increase the total number of network ports required for the entire solution. We had to procure additional switches to support that configuration, as each component had, at a minimum, (4) 1 Gb Ethernet ports available.

Provide robust, high-performance storage services: Commercial, off-the-shelf NAS solutions vary widely in how resiliently and vigilantly they protect the integrity of data. QNAP only offers ZFS in very expensive models, while Netgear and Synology support btrfs to enhance data integrity and mitigate digital bit rot.

Achieve a cooling and noise profile that may exceed a consumer NAS, yet remains quieter than a 2U SuperMicro rack-mount server: Attempts to re-establish a FreeNAS solution on pre-existing hardware resulted in an unbearable amount of fan noise that could be heard from a considerable distance. We understand that cooling drives requires enough airflow to keep them at the temperatures recommended for longevity, but the speed at which multiple 80mm fans must spin to achieve that goal runs counter to keeping such a solution anywhere near living quarters. The potential to move comparable airflow using larger fans running at lower speeds was compelling, and aligns with the cooling approach used by commercial vendors for SMB/home storage solutions.

Retain hot-swap drive capabilities: Mechanical disks will fail. This is a fact of life. With a smaller sample set than Backblaze, our luck with WD Reds and various Seagate products has been less than spectacular. The visual indicators and ease of replacement inherent to hot-swap solutions make maintenance far easier than having to remember which serial number is installed in a given bay of a mid-tower or full-tower case.

With these design goals in play, our solution was based around an AMD Ryzen 7 1700 processor. A 65W TDP for an eight-core/sixteen-thread CPU was the perfect balance of price and performance. Additional data points that set our direction include the following:

VMware ESXi 6.5 U1 has resolved the teething problems that existed when the Ryzen architecture was first introduced. The hacks related to disabling SMT are no longer applicable.

FreeNAS 11.1 supports this hardware if installed on bare metal. We have posted our concerns about the about-face that happened with FreeNAS Corral. However, the ability to run FreeNAS virtually at no additional cost, versus the licenses required for vSAN or other software-defined storage offerings, led us to select it as the virtual storage backend.

Unofficial ECC memory support provides an additional layer of stability and integrity for the solution. The Kingston DIMMs we had available were compatible with the platform, and a BIOS update provided the necessary switches to enable support.

The parts bin for this system, beyond the aforementioned processor, included the following components:

(1) Chenbro SR107 Tower Server: We selected this unit due to the available expansion options that add hot-swap drive capabilities to the enclosure. An excessive number of negative reviews of the Silverstone SST-CS380’s drive cooling and precarious backplane design ruled out what may have been a better fit with some creative fabrication of airflow channels. The Chenbro case was paired with (2) 4-bay, 3.5″ hot-swap cages that reside behind the front panel door. The three 5.25″ bays were converted to provide (5) additional 3.5″ hot-swap bays using the SK33502 kit. This provided a grand total of thirteen 3.5″ hot-swap bays.

(1) Biostar B350GT5 motherboard: With x16 and x4 slots available, our design goals were almost met with this board. It’s been incredibly stable, offers a dual-BIOS setup controlled by a DIP switch, and includes (2) PCI slots. We’ll delve into our findings and challenges with PCI slot operation in another post.

(1) LSI Logic 9305-16i 12Gb/s SAS HBA: The gold standard for FreeNAS. With the goal of running FreeNAS virtually, the controller will be passed through to the guest. In addition to this compatibility, its port count exceeds the available hot-swap bay count while offering a better power and cooling profile than the 9300 controller.

(6) 5 TB hard drives and (4) 4 TB hard drives: These provide a total of two pools within this solution: a 20 TB RAID-Z2 pool for general-purpose file storage, and an 8 TB RAID-10 (striped mirror) pool for virtualization-related tasks. A quick sketch of the capacity math follows this parts list.

(1) 750W FSP Hydro G Gold-rated modular PSU: We had it on hand and it works exceptionally well. Provisioning it for this solution prevents it from becoming e-waste. This model received a Gold award at HardOCP.

(1) 480 GB Crucial M5 SATA SSD: Another carryover from prior systems. The SSD is attached to one of the four onboard SATA ports provided by the B350 chipset and simply holds the hypervisor + the boot image for FreeNAS.
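
For reference, here is a minimal sketch of the usable-capacity arithmetic behind the two pools mentioned above. These are raw figures only; ZFS metadata, partitioning, and TB-versus-TiB accounting will shave a bit off in practice.

```python
def raidz_usable(drives: int, size_tb: float, parity: int) -> float:
    """Usable capacity of a RAID-Z vdev: total drives minus parity drives."""
    return (drives - parity) * size_tb

def mirror_usable(drives: int, size_tb: float) -> float:
    """Usable capacity of striped two-way mirrors (the RAID-10 style layout)."""
    return (drives // 2) * size_tb

# (6) 5 TB drives in RAID-Z2: two drives' worth of capacity go to parity.
print(raidz_usable(6, 5, parity=2))   # 20 TB general-purpose pool

# (4) 4 TB drives as striped mirrors: half the raw capacity is usable.
print(mirror_usable(4, 4))            # 8 TB virtualization pool
```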

Although performance exceeded expectations with this setup, the noise factor remained an outstanding issue. While the chassis and associated drive bays were slightly quieter than a 2U SuperMicro server chassis, continuous operation close to living quarters produces enough fan noise to make this type of setup a non-starter for home lab environments. This noise, combined with the SMB bug introduced in FreeNAS 11.1 U3, resulted in revisiting the prior deployment model for operations.

The start of the new year has been fraught with peril on many fronts. The tandem of Meltdown and Spectre was a considerable blow to assumed paradigms of technological safety and security. Enough time has passed to enable a more reliable stream of information related to mitigation techniques, underlying system dependencies, and potential compatibility pitfalls between software elements within a given system. The sheer quantity of missteps along the way by Intel, AMD, Microsoft, and hardware vendors has raised serious concerns about their ability to independently process a vulnerability of this scope and provide relief without compromising system stability or application integrity. Meltdownattack.com was launched early in the cycle and continues to function as an excellent resource for understanding the scope of these vulnerabilities.

Patches, hotfix packages, and firmware released during the month of January were ultimately more trouble than they were worth. What we’ll refer to as “version 1.0” of the microcode and BIOS updates resulted in compromised system stability, a dramatic increase in unexpected reboots, and miscellaneous errata that have sent Intel back to the drawing board. Tier 1 manufacturers have pulled the recent updates from their respective sites and will republish once a better solution is made available. When the creator of Linux calls out Intel’s proposed fixes as garbage, the impact of Intel’s resource cuts and lack of proper investment in (or understanding of) security becomes evident. AMD’s initial response to these vulnerabilities had to be walked back due to inaccuracies; a subsequent update to the page dedicated to information on this threat better aligns with the facts. As the ecosystem of vendors producing AMD-based solutions is smaller than the one producing Intel-based solutions, any subsequent BIOS and related AGESA updates may take longer to be released.

Considering that Microsoft has reverted or updated a number of patches after their initial release at the beginning of last month, solving this problem is going to take a longer cycle than most of the major 2017 security vulnerabilities. We believe Red Hat’s approach is the best option at this point: don’t take provided code and assertions of a fix at face value. If the fix causes more harm than leaving the vulnerability in place until a less disruptive and more thorough solution is available, don’t implement the fix.
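
On Linux hosts, one way to verify rather than assume is to read the mitigation status the kernel itself reports. A minimal sketch, assuming a kernel new enough (4.15+, or a distribution backport) to expose the sysfs vulnerabilities interface:

```python
from pathlib import Path

# Present on kernels that carry the Meltdown/Spectre reporting patches.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

if not VULN_DIR.is_dir():
    print("Kernel does not report vulnerability status; assume unmitigated.")
else:
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a one-line status such as "Mitigation: PTI",
        # "Vulnerable", or "Not affected".
        print(f"{entry.name}: {entry.read_text().strip()}")
```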

A little over a week ago, the client computing space changed with the shocking announcement of an Intel CPU with integrated AMD graphics. This is not an error. In a purported bid to better compete with graphics market leader nVidia, the primary x86 CPU competitors teamed up to offer Intel processor cores and an appropriately sized Radeon RX Vega GPU in a “package” unlike any prior processor launched by either manufacturer. Less than two days after this landmark development, Intel gained a new employee who happened to be the former CTO of the Radeon Technologies Group. The “sabbatical” that was announced right before the launch of the Radeon RX Vega GPU escalated quickly. Our initial concern with these strange bedfellows was twofold, based on the fallout of the dissolved partnership between Apple and Imagination Technologies.

Apple relied on Imagination’s IP and solutions for multiple generations of the processors that power its tablets and phones. Eventually, Apple was able to build its own solution and no longer required Imagination’s services. Our initial perception of the Intel CPU/AMD GPU tandem in a single package evoked thoughts of a comparable end game: Intel would use AMD’s offerings based on performance and cost until an in-house GPU offering a dramatic increase in performance came to light. Who better to develop this than someone with extensive experience? Although the overlap between an AMD APU using the Vega graphics architecture and an Intel CPU paired with the same design may seem like a win for AMD, the reality for workloads that require high performance from a minimal number of cores is the cannibalization of market share that could have been taken by the newer AMD APUs. The initial benchmarks of the only publicly available Ryzen 5 2500U-based system are interesting, to say the very least. While it’s impressive to see what’s been achieved in a 15W TDP design, the lack of power optimization in comparison to Intel-based systems doesn’t necessarily close the gap in the areas that count most. Performance is certainly “good enough”, but HP’s poor design decisions around screen quality and, optimistically speaking, firmware that may lack the necessary optimizations for acceptable battery life don’t provide the slam dunk needed to take market share.

We’ve recently delved into learning more about Plex as a means of aggregating and consuming content from a central repository. It is one of many robust offerings that provide the services necessary to access a diverse media library. Recent versions of the product can serve as a digital video recorder for over-the-air or cable content if the appropriate network-connected hardware is present. The recent acquisition of a SiliconDust HDHomeRun Connect network tuner allowed us to meet the prerequisite for this functionality. The reasonable price of this unit (~$100 through a variety of retailers) represents an incredible value on its own. Combining its two networked tuners with low-cost or no-cost software simplifies adding television viewing to rooms that may not have a hard-wired coaxial connection. The ability to stream OTA TV when outside of one’s local viewing area opens up the potential to view regional programming without limits. Recording those streams for later viewing is a feature that normally carries some type of subscription charge for the associated appliance and ongoing use of the platform. Windows Media Center has been phased out by Microsoft; the competitors filling this void have created solutions that are mostly ready for mass-market consumption.

Plex is not a free product when using advanced functionality such as Live TV streaming, DVR, and photo management/synchronization. There is a subscription charge associated with Plex Pass to enable the necessary bells and whistles. The product is iterated upon with timely and frequent updates, which provides value for the associated cost. We’ve thoroughly tested Plex on three distinct platforms over the past month, including the following configurations:

The third configuration is overkill for most use cases unless you’re going to be transcoding multiple streams simultaneously. The first configuration struggled to keep up with time-shifting recorded media while simultaneously transcoding. Plex Media Server (PMS) and its associated services are equally capable across the operating systems that can host them. Our recommendation is to use whichever operating system you are most comfortable with, as any troubleshooting efforts may involve upgrading or downgrading versions to resolve errata. Once the installation completes, the majority of administration and configuration is handled through the web interface. After you’ve signed into PMS with your Plex Pass credentials, libraries can be established, supported network or physical tuners can be scanned, and schedules can be implemented for recording shows from the antenna. The electronic programming guide may take a while to fully populate. The structure of the available listing options makes it easy to find the shows you’re interested in and to schedule the desired recordings.

The capabilities for customizing exactly what is recorded with Plex are well thought out and account for most circumstances. Before kicking off your first recording, verify that the time on the system is correct. Synchronize with a trusted time source, validate that your time zone is correct, and spot-check the system time against the anticipated broadcast of a given show. If things do not line up (excluding the ~30 seconds to a minute of advertising prior to the official start of the show), further adjustment of the PMS host’s clock and its time synchronization settings may be needed to correct the issue. By default, the recording process will capture whichever format the broadcast presents; this can be changed to mandate only high-definition content. Additional minutes can be added to start recording in advance of a program’s scheduled start time as well as to continue beyond its scheduled end time. This feature allows adjustments to be made on Sundays, when football games that go into overtime (or longer) push back the start and end times of the programming that follows. Programs can be set to record only new airings, or new and repeat airings; both options have been tested and the results have been dependable. The last option, a personal favorite of ours, is the ability to restrict recordings to specific channels. With a properly positioned and capable antenna, you may receive multiple iterations of a given channel. Some of these signals are received directly, meaning the antenna is facing the direction in which the signal is broadcast. Others that appear to come in clearly are reflected: the antenna is not aimed in the general direction of the broadcast tower, yet picks up the signal indirectly. These reflected signals may be prone to interference or intermittent loss if temporary obstructions enter the path of the reflected transmission. Testing, and research across the conflicting sources of information on the Internet, will be required to identify stations that are prone to interference. If an alternative broadcast exists that is not susceptible to artifacting or temporary signal loss, manually setting the specific channel will reduce the risk of missing parts of the recorded program.
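
One rough way to spot-check the PMS host’s clock before that first recording, sketched here with the third-party ntplib package (pool.ntp.org is only an example; any trusted time source works):

```python
import ntplib                      # pip install ntplib
from datetime import datetime, timezone

response = ntplib.NTPClient().request("pool.ntp.org", version=3)

print("System time (UTC):", datetime.now(timezone.utc).isoformat())
print(f"Offset from NTP:   {response.offset:+.3f} seconds")

# Drift of more than a few seconds is worth fixing before trusting the
# guide data to start and stop recordings on time.
if abs(response.offset) > 5:
    print("Clock is off; re-sync before scheduling recordings.")
```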

Once the schedule is set, shows record to the library designated in the basic recording setup. This is what we’d all expect, until it arbitrarily stops working. Searches for “100% complete DVR Plex” or “Recording not completing Plex” highlight the price of progress. DVR functionality was in beta status in version 1.6.1 of PMS, but worked reliably with multiple tuners. Yes, if you were so inclined, you could connect multiple networked units to the DVR functionality of Plex and record 4, 6, or 8 shows at the same time. Between version 1.6.1 and version 1.7.0, there was a transcoder change noted in the update logs that are readily available in the Plex forums. From 1.7.0 to 1.9.2 (current as of this writing), the problem remains and the DVR service is not reliable. We’ve lost a number of season premieres due to this bug. If you restart the PMS service itself, you’ll normally lose the recording or end up with a small clip of the total recording. A survey being conducted by one of the forum members highlights this issue well. Some Plex users within the voting sample have moved to Emby, MythTV, or another platform. The only immediate loss of functionality within the client software (when paired with 1.6.1) is the inability to adjust the recording schedule through the client. Everything else works remarkably well. The photo synchronization and upload feature is a reliable and low-cost means of dumping your phone or tablet camera roll to a central repository. The DVR function in 1.6.1 is rock solid with absolutely no hangups. We’ve been recording for over two weeks on this platform and have not lost a single show, nor encountered a single transcoder error or stuck “100% complete” recording.

Hopefully, the Plex development team can track down the anomaly with the new transcoder and make a version greater than 1.9.2 reliable. Even with the noted quirks and weeks of research and troubleshooting required to establish a reliable OTA DVR solution, we’d still recommend Plex. It’s robust, actively supported, offers hooks for advanced uses or workflows via scripting, and it keeps getting better. The support community is incredibly helpful in sharing their findings or helping to troubleshoot problems related to more complex setups. While it would be easier to have a direct channel to engage support and the developers with a traditional ticket system, the forums are the only available method of engagement. Fortunately, they’re not neglected.

A rapid influx of reviews for the Radeon RX Vega 64 and 56 has hit the web today, and the results have been rather mixed. A high-level summary of reviews across multiple sites yields the following:

AMD Radeon RX Vega 56 vs. nVidia GeForce GTX 1070 – performance edge to the RX Vega 56 in many use cases with the caveat of considerably higher operating temperatures and power draw.

AMD Radeon RX Vega 64 vs. nVidia GeForce GTX 1080 – performance edge to the GTX 1080 in many use cases; power draw and operating temperatures on the RX Vega 64 make it a tough pill to swallow.

Within minutes of becoming available from retailers such as Newegg and Amazon, the card was already sold out. The baseline cryptocurrency benchmarks demonstrated somewhat competitive performance that is quickly offset by operating costs that nullify some of the profitability of mining. Keep in mind, the limited supply of cards was being sold at a $100 USD premium without the “Radeon Pack” that was supposed to help ration some portion of the available inventory for “gamers”. While Ryzen and Threadripper have received solid reviews that recognize the value proposition and capabilities of those processors, the recommendations for AMD’s high-end GPU offerings were muted and specifically conditional. It appears that the missteps encountered during the Radeon Fury launch did not provide the lessons needed to prevent the same mistakes from happening yet again. The innovation of the Fury line was offset in part by the initially high cost of entry and by manufacturing or production issues early in the cycle. The optimizations and binning for the Fury Nano resulted in a better product that consumed less power and performed better than the full-sized, air-cooled Fury line.

The comments sections of many popular review sites offer up various viable theories as to how AMD made such a significant misstep after successful launches of multiple CPU lines built on a fairly solid foundation. Explanations ranging from finite budgets and inadequate resources to develop the same caliber of value-rich solutions on multiple fronts, to finger-pointing at issues with the fabrication process at partners AMD is contractually obligated to use, have been bandied about. The larger concerns that may be extrapolated or “theorycrafted” from the quantified results of the extensive analysis performed on the Vega variants are as follows:

The insistence on HBM2 plus a silicon interposer, in lieu of a traditional high-bandwidth bus paired with high-speed memory, has pushed the power envelope and thermals beyond expectation.

If the reduced iteration of this architecture is still capable of drawing over 200W, the implementation in an APU model will either require extensive compromises to the compute allocations of the module or will demonstrate that the memory configuration is indeed the culprit of the extraordinary power draw and associated cooling requirements.

Improved yields on optimized bins of this part would provide the capability to launch a mass-market equivalent of the Fury Nano. AMD cannot afford to get the price, positioning, and power wrong for this “hinted” SKU.

This is certainly just the start of driver optimization for the Vega line. Some may pin their hopes on AMD’s past successes in wringing every last ounce of performance out of a given architecture. While the raw compute numbers quoted for these GPUs provide hope for a major revision that turns the tables and improves performance across numerous applications, doing so while drastically reducing power may not be as feasible as it was with the Polaris family of GPUs. The consistent message across hundreds of reviews provides a better understanding of what analysts and AMD have respectively forecast for near-term growth in revenue and market share. This lack of competition doesn’t force nVidia’s hand in releasing the consumer variants of Volta. Without viable competition, we may be looking at GPU cycles and innovation slowing to the point where CPU cycles and associated innovation rested until this year. Here’s hoping that AMD can right the ship in the GPU space, especially in advance of the release of their mobile APUs.

It has been possible to pre-order AMD’s high-end desktop solutions for a few days now. For workflows that can tax eight or more cores, Ryzen Threadripper appears to provide a considerable amount of value for the hard-earned dollar. Installing and cooling a 180W TDP processor on a platform with a larger-than-usual physical socket comes with its own set of challenges. While AMD is including hardware in the package that enables the use of well-regarded closed-loop coolers, the solution used in the MSI video of a Threadripper CPU install doesn’t appear to be something that’s readily available. Noctua just announced a line of coolers specifically designed for this platform. Thanks to the cryptocurrency boom, obtaining a high-performance graphics card near the suggested retail price has become a pipe dream. The recent release of information related to the new Radeon RX Vega line of GPUs presents additional questions. The staggered availability of the twelve- and sixteen-core variants of Ryzen Threadripper (August 10th), X399-based motherboards (August 10th), the RX Vega GPUs (August 14th), and the eight-core Threadripper (August 31st) creates unnecessary challenges when building a new system around this platform. Ordering the motherboard and either a twelve- or sixteen-core Threadripper CPU in advance of the Vega launch would logically pair with a GeForce GTX 1070 or better at a considerable price premium. Four days after that purchase, buyer’s remorse may set in if, and only if, Vega lives up to expectations of trading blows with GTX 1080-class GPUs. The bundled discount incentives discussed during AMD’s presentation will not deter miners from cornering the market on Vega, given its computational capability for the price. A sell-through of initial Vega inventory will create a gap for those holding out for the eight-core Threadripper. It will be an exciting month to watch as all of these components officially launch and are pitted against the best the market has to offer right now.

As this month comes to a conclusion, the known implications of the WannaCry ransomware outbreak have organizations still scrambling to ensure the necessary Windows patches are in place before the next permutation of this exploit is released into the wild. For those who prefer a Linux environment for file sharing and storage, a similar vulnerability was discovered to have been lurking in Samba for seven years. Patch management must always be part of everyone’s standard operating procedure when it comes to utilizing technology. The fine folks at Synology have already released a patch to ensure the Samba implementation on their devices is no longer vulnerable to this issue.

When vendors take responsibility and quickly address these types of issues, the relationship built between customer and supplier is further solidified. Over the past few years, many brick and mortar retailers have failed to prevent the types of breaches that manifested and persisted over extended periods of time due to malware leveraging unpatched exploits. The whipping boy of this week happens to be Chipotle Mexican Grill. It’s a shame that there aren’t any reference case studies or peers within the retail industry that would have information relating to best practices to prevent this type of incident from occurring again. Oh wait… there are.

The fine folks at Home Depot owned their mistake and provided credit monitoring services for customers affected by a comparable breach in 2014. Chipotle’s take? They’re opting to kick the can down the road and are more than happy to provide links where customers can request free reports, but they’re not willing to “own” their mistake as T-Mobile, Home Depot, Wendy’s, and other retailers that place the appropriate value on their customer base have done. For the premium that customers now pay for “quality ingredients” and “an ethical supply chain”, it would be far more beneficial to include a side of proactive security and monitoring at no additional cost.

Based on the response to this event, if you still enjoy the food and want to support a company that won’t support its inconvenienced customers (the scope of that inconvenience will become apparent as the stolen payment data is abused), we’d strongly recommend one of the following options:

1.) Old school payment methodology: Cash is king, and the risk of your payment information being stolen or misused quickly trends to zero.

2.) Gift cards: Visit your local retailer that sells Chipotle gift cards and use a cash-back credit card at this non-Chipotle retailer to obtain an alternate form of currency that doesn’t require cash or a trip to the ATM. The rewards program at the retailer of choice may be combined with another incentive program to further stretch your dollar while reducing the risk of exposing your payment information.

Over the past week, two major security announcements were made; one pertains to Intel’s Active Management Technology (AMT) and the other addresses the presence of malware on the IBM USB media used to initialize their Storwize arrays. Intel’s issue is far more concerning, as its reach extends to hardware that will no longer be under any type of support from a variety of manufacturers. Out-of-band management platforms may not receive the appropriate level of scrutiny from IT departments or from the OEMs that produce such offerings. Premium interfaces from companies such as HP Enterprise and Dell EMC incur further costs via the additional hardware or licenses that may be required to enable full functionality. The fact that those solutions reside outside of the native system stack and receive regular updates that promptly address security or performance issues further justifies the added cost in light of Intel’s egregious error. Disabling the associated technology on a per-platform basis is recommended for mitigating the risks of AMT. IBM’s error is considerably more amateur, as it suggests that baseline client security suites were either nowhere to be found or disabled on the system(s) used to construct the boot media. The annoyance of missing files when certain solutions are overly aggressive (looking specifically at you, Trend Micro) in scanning and filtering data transferred to and from external media would have been a benefit in this situation.
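
As a rough first pass at finding exposed management interfaces on a network, the well-known Intel AMT web ports (16992 for HTTP, 16993 for HTTPS) can be probed. The sketch below is only a reachability check, not a substitute for vendor detection tools or firmware updates, and the target address is a placeholder:

```python
import socket

# Default Intel AMT web interface ports (HTTP / HTTPS).
AMT_PORTS = (16992, 16993)

def amt_ports_open(host, timeout=2.0):
    """Return the AMT ports that accept a TCP connection on this host."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

# Placeholder address; substitute hosts from your own inventory.
target = "192.0.2.10"
found = amt_ports_open(target)
status = f"AMT interface reachable on ports {found}" if found else "no AMT ports open"
print(f"{target}: {status}")
```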

Major shakeups and drama have transpired over at iXsystems, the makers of FreeNAS. The combination of internal politics, a lack of individuals with the spine to challenge the status quo, and the departure of the CTO (along with the dismissal or departure of other resources) has made FreeNAS Corral the shortest-lived NAS platform in recent memory. Due to hectic schedules, the announcements and subsequent decisions to “reverse course while moving forward” have resulted in the following developments:

FreeNAS Corral is no more; 10.0.4 went from Prod to “Experimental/Test/Eval” with a complete lack of communication via e-mail.

FreeNAS 9.10.2 (and jail hell for CrashPlan) is the gold standard once again.

FreeNAS 9.10.3 becomes FreeNAS 11, and doesn’t reach feature parity with what was FreeNAS Corral. The refreshed UI being offered as a work in progress doesn’t look terrible, but it’s not the “new hotness” that was Corral’s UI.

FreeNAS 9.10.4 becomes FreeNAS 11.1, which aims to be closer to the feature set of FreeNAS Corral.

Bugs and errata will always be faced by those who ride the bleeding edge of technology. Some things (e.g. the UI elements for establishing iSCSI connectivity) were not functional in Corral, but viable workarounds using the CLI existed. Getting a taste of how much better things can be with a native implementation of Docker makes reverting to 9.10.2 (or the release candidate of 11) a non-starter. Arbitrary and unjustified changes to a statically configured jail require time and troubleshooting to resolve; time that would be better spent elsewhere when things could “just work”. Processing both sides of the story, the fact that those with legitimate concerns didn’t speak up during the development of FreeNAS Corral raises questions about the internal structure of the organization. Making change for the sake of doing something different can be disastrous. Raising concerns about deficiencies that will arise during the “long game” takes courage; far more courage than removing a headphone jack. If factional silos were enabled and persisted during such a major platform transformation, what prevents such behavior and outcomes from occurring again? Would QNAP, Synology, Dell EMC, Nexenta, or others retain customers after pulling such an about-face less than 60 days after blessing a product release as generally available or “production”?

Doing the data relocation hokey pokey yet again to revert to a supported release is not appealing to our organization, yet it becomes inevitable given the features we’ve come to rely on within the product. Our long-term move will be to return to a commercial, off-the-shelf solution to keep this type of disruption from happening again. The decision-making challenge we face is as follows:

Synology offers btrfs natively, which provides benefits comparable to ZFS with respect to preventing data corruption. However, the newer SMB offerings still use the Intel Atom C2000 processor, which has its own set of flaws.

QNAP does not offer btrfs natively, but does offer highly expandable models and can function as a hyper-converged appliance (Virtualization Station + Container Station). The QNAP solution that does offer ZFS is out of scope due to price.

The FreeNAS system that we built last year has been upgraded to FreeNAS Corral. The teething problems inherent in a new release were more prominent than expected. While the core functionality and stability were very good, the ground-up rewrite has certainly led to some unexpected challenges. Over the past few weeks, iXsystems has done an amazing job of tackling the initial errata. At this point, the product feels as if it’s ready for standard production use cases. Although some GUI elements were phased out as part of the new version, and leveraging the more advanced functionality of this product may require some searching or experimentation, the time and effort are certainly worth it. The CrashPlan Docker container alone is worth the price of admission since it is incredibly easy to set up. Anomalies that occurred when running that solution in a jail under the 9.x tree are now a thing of the past. The pitch of free hyper-convergence is certainly attractive. While we haven’t explored the finer nuances of the bhyve hypervisor just yet, FreeNAS Corral has delivered the goods in an elegant and powerful package.