Red Hat

As usual, the conference began with Matthew Miller’s traditional “State of Fedora” address wherein he uses pretty graphs to confound and amaze us. Oh, and reminds us that we’ve come a long way in Fedora and we have much further to go together, still.

Next was a keynote by Cate Huston of Automattic (now the proud owners of both WordPress and Tumblr, apparently!). She talked to us about the importance of understanding when a team has become dysfunctional and some techniques for getting back on track.

After lunch, Adam Samalik gave his talk, “Modularity: to modularize or not to modularize?”, describing for the audience some of the cases where Fedora Modularity makes sense… and some cases where other packaging techniques are a better choice. This was one of the more useful sessions for me. Once Adam gave his prepared talk, the two of us took a series of great questions from the audience. I hope that we did a good job of disambiguating some things, but time will tell how that works out. We also got some suggestions for improvements we could make, which were translated into Modularity Team tickets: here and here.

IBM’s cloud strategy has gone through a number of iterations as it attempts to offer a compelling hybrid cloud to shift its customers from traditional IT architectures to modern cloud computing.

IBM is gambling that those customers who have yet to fully embrace the public cloud remain committed to private and hybrid cloud-based infrastructure, and that, if they do use public clouds, they want a cloud-agnostic approach to moving workloads. In July, IBM closed the $34bn purchase of Red Hat, an acquisition it hopes will finally enable it to deliver cloud-agnostic products and services.

To tie in with the completion of the acquisition of Red Hat, IBM commissioned Forrester to look at the benefits to those organisations that are both Red Hat and IBM customers.

Open source software (OSS), by definition, has source code that’s available for anyone to see, learn from, use, modify, and distribute. It’s also the foundation for a model of collaborative invention that empowers communities of individuals and companies to innovate in a way that proprietary software doesn't allow.

Enterprise open source software is OSS that’s supported and made more secure (by a company like Red Hat) for enterprise use. It plays a strategic role in many organizations and continues to gain popularity.

While hopefully the upstream Linux kernel code can be improved to benefit all distributions on low-memory desktops, Fedora developers at least are discussing their options for improving the experience in the near term. With various simple "tests", it's easy to illustrate just how poorly the Linux desktop responds under memory pressure. Besides desktop interactivity becoming awful under memory pressure, some argue that an unprivileged task shouldn't be able to put the system in such a state in the first place.

Jump ahead a few years to the Fourth EU AML Directive - a regulation which required compliance by June 2017 - demanding enhanced Customer Due Diligence procedures must be adhered to when cash transactions reach an aggregated amount of more than $11,000 U.S. dollars (USD). (The Fifth EU AML Directive is on the way, with a June 2020 deadline.) In New Zealand’s Anti-Money Laundering and Countering Financing of Terrorism Amendment Act of 2017 it is stated that banks and other financial entities must provide authorities with information about clients making cash transactions over $6,500 USD and international monetary wire transfers from New Zealand exceeding $650 USD. In 2018, the updated open banking European Directive on Payment Services (PSD2) that requires fraud monitoring also went into effect. And the Monetary Authority of Singapore is developing regulations regarding the use of cryptocurrencies for terrorist funding and money laundering, too.

As new technologies and infrastructure such as virtualization, cloud, and containers are introduced into enterprise networks to make them more efficient, these hybrid environments are becoming more complex—potentially adding risks and security vulnerabilities.

According to the Information Security Forum’s Global Security Threat Outlook for 2019, one of the biggest IT trends to watch this year is the increasing sophistication of cybercrime and ransomware. And even as the volume of ransomware attacks is dropping, cybercriminals are finding new, more potent ways to be disruptive. An article in TechRepublic points to cryptojacking malware, which enables someone to hijack another's hardware without permission to mine cryptocurrency, as a growing threat for enterprise networks.

To more effectively mitigate these risks, organizations could invest in automation as a component of their security plans. That’s because it takes time to investigate and resolve issues, in addition to applying controlled remediations across bare metal, virtualized systems, and cloud environments, both private and public, all while documenting changes.

The best way to think about this is to ask a different but related question: why don’t we have training for developers to write code with fewer bugs? Even the suggestion of this would be ridiculed by every single person in the software world. I can only imagine the university course “CS 107: Error-free Development”. Everyone would fail the course. It would probably be a blast to teach: you could spend the whole semester yelling at the students for being stupid and not just writing code with fewer bugs. You don’t even have to grade anything; just fail them all, because you know the projects have bugs.

Humans are never going to write bug-free code; this isn’t a controversial claim. Pretending we can somehow teach people to write bug-free code would be a monumental waste of time and energy, so we don’t even try.

Now it’s time for a logic puzzle. We know that we can’t train humans to write bug-free code. All security vulnerabilities are bugs. So we know we can’t train humans to write vulnerability-free code. Well, we don’t really know it, but judging by history we keep acting as if we can. The last twenty years have seen an unhealthy obsession with getting humans to change their behaviors to be “more secure”. The only things that have come out of these efforts are 1) nobody likes security people anymore, 2) we had to create our own conferences and parties because we don’t get invited to theirs, and 3) they probably never liked us in the first place.

It’s important to make the distinction between open hybrid cloud and multi-cloud environments. A hybrid cloud features coordination between the tasks running in the different environments. Multi-cloud, on the other hand, simply uses different clouds without coordinating or orchestrating tasks among them.

Red Hat solutions are certified on all major cloud providers, including Alibaba Cloud, Amazon Web Services, the Google Cloud Platform, IBM Cloud, and Microsoft Azure. As you’re defining your hybrid cloud strategy, you can be confident that you won’t be going it alone as you work with a cloud provider. You won’t be the first person to try things on Cloud x; you’ll have the promise of a proven provider that works with your hybrid architecture.

My new position has me working with Red Hat customers in the financial services industry. These customers have strict regulations for controlling access to machines. When it comes to installing OpenShift, we are often deploying into an environment we call “air gapped.” What this means in practice is that all install media must be present inside the data center and cannot be fetched online on demand. This approach is at odds with the convenience of an on-demand repository pull of a container image. Most of the effort involves setting up internal registries and repositories, and getting X509 certificates properly created and deployed to make access to those repositories secure.

The biggest thing we learned is that automation counts. When you need to modify a file, take the time to automate how you modify it. That way, when you need to do it again (which you will), you don’t make a mistake in the modification. In our case, we were following a step-by-step document that got us about halfway through before we realized we had made a mistake. Once we switched from manual edits to automated ones, we were far more likely to roll back to a VM snapshot and roll forward to make progress. At this point, things really started getting smoother.

Generating some random statsd communication is easy: it’s a text-based UDP protocol, and all you need is netcat. However, things change when the statsd server is integrated with a real application flooding it with thousands of packets carrying various attributes.
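For a quick illustration, the netcat approach can be reproduced in a few lines of Python. The metric names here are made up, and the server address is an assumption (8125 is statsd's conventional UDP port); the wire format "name:value|type" is the real statsd line protocol:

```python
import random
import socket

# statsd metrics are plain text over UDP: "<name>:<value>|<type>"
# (hypothetical metric names; point STATSD_ADDR at your real server)
STATSD_ADDR = ("127.0.0.1", 8125)

def make_packet():
    # "c" is a counter, "ms" is a timing metric in the statsd protocol
    kinds = {"app.requests": "c", "app.errors": "c", "app.latency_ms": "ms"}
    name = random.choice(list(kinds))
    value = random.randint(1, 500)
    return f"{name}:{value}|{kinds[name]}".encode()

def flood(n=1000):
    # UDP is fire-and-forget, so this works even with no server listening
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(n):
        sock.sendto(make_packet(), STATSD_ADDR)
    sock.close()

if __name__ == "__main__":
    flood()
```

The single-packet equivalent with netcat would be piping a line like `app.requests:1|c` into `nc -u -w0 127.0.0.1 8125`.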

It's super easy to get lost in the world of big data technologies. There are so many of them that it seems a day never passes without the advent of a new one. Still, such fast development is only half the trouble. The real problem is that it's difficult to understand the functionality and the intended use of the existing technologies.

To find out what technology suits their needs, IT managers often contrast them. We've also conducted an academic study to make a clear distinction between Apache Hive and Apache HBase—two important technologies that are frequently used in Hadoop implementation projects.

Sysadmins have plush, easy desk jobs, right? We sit in a nice climate-controlled office and type away in our terminals, never really forced to exert ourselves. At least, it might look that way. As I write this during a heat wave here in my hometown, I'm certainly grateful for my air-conditioned office.

Being a sysadmin, though, carries a lot of stress that people don't see. Most sysadmins have some level of on-call duty. In some places, it's a rotation. In others, it's 24/7. That's because some industries demand a quick response, and others maybe a little less. We're also expected to know everything and solve problems quickly. I could write a whole separate article on how keeping calm in an emergency is a pillar of a good sysadmin.

The point I'm trying to make is that we are, in fact, under a lot of pressure, and we need to keep it together. While in some cases profit margins are at stake, in other cases lives could be. Let's face it, in this digital world almost everything depends on a sysadmin to keep the lights on. Maintaining all of this infrastructure pushes many sysadmins (and network admins, and especially information security professionals) to the brink of burnout.

So, this article addresses how getting away from the day job can help you keep your sanity.

Rook, a storage orchestrator for Kubernetes, has released version 1.0 for production-ready workloads that use file, block, and object storage in containers. Highlights of Rook 1.0 include support for storage providers through operators like Ceph Nautilus, EdgeFS, and NFS. For instance, when a pod requests an NFS file system, Rook can provision it without any manual intervention.

Rook was the first storage project accepted into the Cloud Native Computing Foundation (CNCF), and it helps storage administrators to automate everyday tasks like provisioning, configuration, disaster recovery, deployment, and upgrading storage providers. Rook turns a distributed file system into storage services that scale and heal automatically by leveraging the Kubernetes features with the operator pattern. When administrators use Rook with a storage provider like Ceph, they only have to worry about declaring the desired state of the cluster and the operator will be responsible for setting up and configuring the storage layer in the cluster.
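The operator pattern described above, declaring a desired state and letting a controller drive the cluster toward it, can be sketched abstractly. This toy reconcile loop is illustrative only; the names are hypothetical and it does not use Rook's or Kubernetes' actual APIs:

```python
# A toy sketch of the operator pattern: compare declared desired state
# against observed state and compute the actions needed to converge.

def reconcile(desired, observed):
    """Return (verb, name, spec) actions that move observed toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Hypothetical storage resources: the admin declares, the operator acts.
desired = {"ceph-pool-a": {"replicas": 3}, "nfs-share": {"size_gb": 100}}
observed = {"ceph-pool-a": {"replicas": 2}}
print(reconcile(desired, observed))
```

A real operator runs this comparison continuously against the cluster API, which is what lets Rook provision an NFS file system for a pod without manual intervention.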

Mythic Beasts is a UK-based “no-nonsense” hosting provider which provides managed and un-managed co-location, dedicated servers, VPS and shared hosting. They are also conveniently based in Cambridge where I live, and very nice people to have a coffee or beer with, particularly if you enjoy talking about IPv6 and how many web services you can run on a rack full of Raspberry Pis. The “heart” of Flathub is a physical machine donated by them which originally ran everything in separate VMs – buildbot, frontend, repo master – and they have subsequently increased their donation with several VMs hosted elsewhere within their network. We also benefit from huge amounts of free bandwidth, backup/storage, monitoring, management and their expertise and advice at scaling up the service.

Starting with everything running on one box in 2017 we quickly ran into scaling bottlenecks as traffic started to pick up. With Mythic’s advice and a healthy donation of 100s of GB / month more of bandwidth, we set up two caching frontend servers running in virtual machines in two different London data centres to cache the commonly-accessed objects, shift the load away from the master server, and take advantage of the physical redundancy offered by the Mythic network.

As load increased and we brought a CDN online to bring the content closer to the user, we also moved the Buildbot (and its associated Postgres database) to a VM hosted at Mythic in order to offload as much IO bandwidth as possible from the repo server and keep up sustained HTTP throughput during update operations. This helped significantly, but we are in discussions with them about a yet larger box with a mixture of disks and SSDs to handle the concurrent read and write load that we need.

Even after all of these changes, we keep the repo master on one big physical machine with directly attached storage, because repo update and delta computations are hugely IO-intensive operations, and our OSTree repos contain over 9 million inodes which get accessed randomly during this process. We also have a physical HSM (a YubiKey) which stores the GPG repo signing key for Flathub; it’s really hard to plug a USB key into a cloud instance and know where it is and that it’s physically secure.

The Red Hat Innovation Awards have been held annually since 2007, and the nominations for the 2020 awards are now open. The Red Hat Innovation Awards recognize organizations for the transformative projects and outstanding results they have achieved with Red Hat’s open source solutions.

Open source has helped transform technology from the datacenter to the cloud and the Red Hat Innovation Awards showcase its transformative impact in organizations around the world. Users should nominate organizations that showcase successful IT implementation and projects that made a difference using open source.

Decades before today's deep learning neural networks compiled imponderable layers of statistics into working machines, researchers were trying to figure out how one explains statistical findings to a human.

IBM this week offered up the latest effort in that long quest to interpret, explain, and justify machine learning, a set of open-source programming resources it calls "AI 360 Explainability."

The toolkit offers IBM explainability algorithms, demos, tutorials, guides and other resources to explain machine learning outcomes. IBM explained there are many ways to go about understanding the decisions made by algorithms.

“It is precisely to tackle this diversity of explanations that we’ve created AI Explainability 360 with algorithms for case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more,” Aleksandra Mojsilovic, IBM Fellow at IBM Research wrote in a post.

The company believes this work can benefit doctors who are comparing various cases to see whether they are similar, or loan applicants who were denied and want to understand the main reason for the rejection.

PCI policy pays a lot of attention to systems that manage sensitive cardholder data. These systems are labeled as "in scope", which means they must comply with PCI-DSS standards. This scope extends to systems that interact with these sensitive systems, and there is a strong emphasis on compartmentation—separating and isolating the systems that are in scope from the rest of the systems, so you can put tight controls on their network access, including which administrators can access them and how.

Our architecture started with a strict separation between development and production environments. In a traditional data center, you might accomplish this by using separate physical network and server equipment (or using abstractions to virtualize the separation). In the case of cloud providers, one of the easiest, safest and most portable ways to do it is by using completely separate accounts for each environment. In this way, there's no risk that a misconfiguration would expose production to development, and it has a side benefit of making it easy to calculate how much each environment is costing you per month.

When it came to the actual server architecture, we divided servers into individual roles and gave them generic role-based names. We then took advantage of the Virtual Private Cloud feature in Amazon Web Services to isolate each of these roles into its own subnet, so we could isolate each type of server from others and tightly control access between them.

By default, Virtual Private Cloud servers are either in the DMZ and have public IP addresses, or they have only internal addresses. We opted to put as few servers as possible in the DMZ, so most servers in the environment only had a private IP address. We intentionally did not set up a gateway server that routed all of these servers' traffic to the internet—their isolation from the internet was a feature!

Of course, some internal servers did need some internet access. For those servers, it was only to talk to a small number of external web services. We set up a series of HTTP proxies in the DMZ that handled different use cases and had strict whitelists in place. That way we could restrict internet access from outside the host itself to just the sites it needed, while also not having to worry about collecting lists of IP blocks for a particular service (particularly challenging these days since everyone uses cloud servers).
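The allowlisting idea can be sketched in a few lines. The host names below are hypothetical, and a real deployment would enforce this inside the proxy itself (for example with Squid ACLs) rather than in application code:

```python
from urllib.parse import urlparse

# Hypothetical allowlist a DMZ egress proxy might consult before
# forwarding a request from an internal server.
ALLOWED_HOSTS = {"api.example-payments.com", "updates.example-vendor.com"}

def is_allowed(url):
    """Permit a request only when its destination host is on the allowlist."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS

print(is_allowed("https://api.example-payments.com/v1/status"))  # allowed
print(is_allowed("https://evil.example.net/exfil"))              # blocked
```

Matching on host names rather than IP blocks is what sidesteps the problem the author mentions: cloud-hosted services change IP ranges constantly, but their DNS names are stable.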

[...]

Although I covered a lot of ground in this infrastructure write-up, I still covered only the higher-level details. For instance, deploying a fault-tolerant, scalable Postgres database could be an article all by itself. I also didn't talk much about the extensive documentation I wrote that, much like my articles in Linux Journal, walks the reader through how to use all of these tools we built.

As I mentioned at the beginning of this article, this is only an example of an infrastructure design that I found worked well for me with my constraints. Your constraints might be different and might lead to a different design. The goal here is to provide you with one successful approach, so you might be inspired to adapt it to your own needs.

The ICS Advisory (ICSA-19-211-01) released on July 30th by the Cybersecurity and Infrastructure Security Agency (CISA) is chilling to read. According to the documentation, VxWorks is “exploitable remotely” and requires “low skill level to exploit.” Elaborating further, CISA risk assessment concludes, “Successful exploitation of these vulnerabilities could allow remote code execution.”
The potential consequences of this security breach are astounding to measure, particularly when I look back on my own personal experiences in this space, and now as an Account Executive for Embedded Systems here at SUSE.

[...]

At the time, VxWorks was the standard go-to OS in the majority of the embedded production platforms I worked with. It was an ideal way to replace the legacy stove-piped platforms with an Open Architecture (OA) COTS solution. In light of the recent CISA warning, however, it is concerning to know that many of those affected systems processed highly-classified intelligence data at home and abroad.

TLS 1.3 is the sixth iteration of the Secure Sockets Layer (SSL) protocol. Originally designed by Netscape in the mid-1990s to serve the purposes of online shopping, it quickly became the primary security protocol of the Internet. It is no longer limited to web browsing; among other things, it secures email transfers, database access, and business-to-business communication.

Because it had its roots in the early days of public cryptography, when public knowledge about securely designing cryptographic protocols was limited, the first two iterations, SSLv2 and SSLv3, are now quite thoroughly broken. The next two iterations, TLS 1.0 and TLS 1.1, depend on the security of Message Digest 5 (MD5) and Secure Hash Algorithm 1 (SHA1).
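In practice, a client can refuse the broken protocol versions by setting a floor on what it will negotiate. A minimal sketch using Python's standard ssl module (the TLSVersion API requires Python 3.7+ and OpenSSL 1.1+):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# ruling out SSLv2/SSLv3 and the MD5/SHA1-dependent TLS 1.0/1.1.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake made through ctx.wrap_socket(...) will now fail
# against a peer that only offers TLS 1.1 or older.
print(ctx.minimum_version)
```

Setting `ctx.minimum_version = ssl.TLSVersion.TLSv1_3` would restrict connections to TLS 1.3 only, at the cost of compatibility with older servers.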

Fedora Workstation is all about Gnome and has been since the beginning, but that doesn’t mean we don’t care about Qt applications; the opposite is true. Many users run Qt applications, even on Gnome, mainly because many KDE/Qt applications don’t have an adequate replacement written in Gtk, or because they are simply used to them and have no real reason to switch to another one.

For Qt integration, there is some sort of Gnome support in Qt itself, including a platform theme that reads Gnome configuration, like fonts and icons. This platform theme also provides native file dialogs, but don’t expect a native look for Qt applications. There used to be a gtk2 style, which used gtk calls directly to render native-looking Qt widgets, but it was moved from qtbase to qt5-styleplugins because it cannot be used in combination with gtk3 today.

For the reasons mentioned above, we have been working on a Qt style to make Qt applications look native in Gnome. This style is named adwaita-qt, and from the name you can guess that it makes Qt applications look like Gtk applications with the Adwaita style. Adwaita-qt is actually not a new project; it’s been around for years and was developed by Martin Bříza. Unfortunately, Martin left Red Hat a long time ago, and since then a new version of Gnome’s Adwaita was released, completely changing the colors and making the Adwaita theme look more modern. Being the one who takes care of these things nowadays, I started slowly updating adwaita-qt to match the current Gnome Adwaita theme and voilà, a new version was released after 3 months of intermittent work.

Friday with Infra is a new event run by the CPE (Community Platform Engineering) team that will help potential contributors start working on some of the applications we maintain. During this event, members of the CPE team will help you get started on those applications and assist with any issue you may encounter. By the end of the event, you should be able to maintain an application by yourself.

Red Hat has joined the RISC-V Foundation to help foster this open-source processor ISA.

While we're still likely years away from seeing any serious RISC-V powered servers that can deliver meaningful performance, Red Hat has been active in promoting RISC-V as an open-source processor instruction set architecture, one of the most promising libre architectures we have seen over the years. Red Hat developers have already helped with Fedora's RISC-V support, and now the IBM-owned company is helping out more and showing its commitment by joining the RISC-V Foundation.

Fedora 30 is my primary operating system for desktops and servers, so I usually try to take it everywhere I go. I was recently doing some benchmarking for kernel compiles on different cloud platforms and I noticed that Fedora isn’t included in Google Compute Engine’s default list of operating system images.

As the lead engineer on the Power10 processor, Bill Starke already knows what most of us have to guess about Big Blue’s next iteration in a processor family that has been in the enterprise market in one form or another for nearly three decades. Starke knows the enterprise grade variants of the Power architecture designed by IBM about as well as anyone on Earth does, and is acutely aware of the broad and deep set of customer needs that IBM always has to address with each successive Power chip generation.

It seems to be getting more difficult over time, not less so, as the diversifying needs of customers run up against the physical reality of the Moore’s Law process shrink wall and the economics of designing and manufacturing server processors in the second and soon to be the third decade of the 21st century. But all of these challenges are what get hardware and software engineers out of bed in the morning. Starke started out at IBM in 1990 as a mainframe performance analysis engineer in the Poughkeepsie, New York lab and made the jump to the Austin Lab where the development for the AIX variant of Unix and the Power processors that run it is centered, first focusing on the architecture and technology of future systems and then Power chip performance and then shifting to being one of the Power chip architects a decade ago. Now, Starke has steered the development of the Power10 chip after being heavily involved in Power9 and is well on the way to mapping out what Power11 might look like and way off in the distance has some ideas about what Power12 might hold.

On Friday, International Business Machines (IBM) finally provided detailed financial projections on the Red Hat merger. The company had always provided an indication that the deal was immediately cash flow accretive while not EPS accretive until the end of year two. The headlines spooked investors, but the details should bring investors back with a smile.

Earlier this year, I wrote about a new approach my team is pursuing to inform our Container Adoption Program. We are using software delivery metrics to help keep organizations aligned and focused, even when those organizations are engaging in multiple workstreams spanning infrastructure, release management, and application onboarding. I talked about starting with a set of four core metrics identified in Accelerate: Building and Scaling High Performance Technology Organizations (by Nicole Forsgren, Jez Humble, and Gene Kim) that act as drivers of both organizational and noncommercial performance.

Let’s start to highlight how those metrics can inform an adoption program at the implementation team level. The four metrics are: Lead Time for Change, Deployment Frequency, Mean Time to Recovery, and Change Failure Rate. Starting with Lead Time and Deployment Frequency, here are some suggestions for activities that each metric can guide in initiatives to adopt containers, with special thanks to Eric Sauer, Prakriti Verma, Simon Bailleux, and the rest of the Metrics-Driven Transformation working group at Red Hat.
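As a rough illustration of how two of these metrics might be computed from deployment records, here is a minimal sketch with entirely made-up data; it is not part of Red Hat's program, just the arithmetic behind the definitions:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was committed
# and when it reached production.
deploys = [
    {"committed": datetime(2019, 9, 2, 9, 0), "deployed": datetime(2019, 9, 2, 15, 0)},
    {"committed": datetime(2019, 9, 4, 10, 0), "deployed": datetime(2019, 9, 5, 10, 0)},
    {"committed": datetime(2019, 9, 9, 8, 0), "deployed": datetime(2019, 9, 9, 12, 0)},
]

# Lead Time for Change: commit -> production, averaged.
lead_times = [d["deployed"] - d["committed"] for d in deploys]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deploys per week over the observed window.
window = deploys[-1]["deployed"] - deploys[0]["deployed"]
freq_per_week = len(deploys) / (window.total_seconds() / (7 * 24 * 3600))

print(avg_lead)                    # 11:20:00
print(round(freq_per_week, 2))     # 3.05
```

Mean Time to Recovery and Change Failure Rate follow the same shape: they only require recording when incidents start and end, and which deployments caused a failure.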

The Open Policy Agent Gatekeeper project can be leveraged to help enforce policies and strengthen governance in your Kubernetes environment. In this post, we will walk through the goals, history, and current state of the project.

More in Tux Machines

today's leftovers

Intel's speedy Clear Linux distribution could be running under the hood of your car.
While we're fascinated that Intel's open-source Clear Linux distribution offers meaningful performance advantages over other distributions while still focusing on security and offering a diverse package set, we often see it asked: who actually uses Clear Linux? Some argue that Clear Linux is just a toy or a technology demo, but it's actually more than that.

Radeon ROCm 2.7.2 is now available as the newest update to AMD's open-source GPU compute stack for Linux systems.
ROCm 2.7.2 is a small release that just fixes the upgrade path when moving from older ROCm releases; upgrades to v2.7.2 should now run correctly. This release comes after the recent ROCm 2.7.1 point release, which fixed some components not properly loading the ROC tracer library.

There's an exciting patch set to GNOME Shell and Mutter now pending for finally wiring up the full-screen unredirected display / full-screen bypass compositing for helping the performance of full-screen games in particular on Wayland.
GNOME on X11 has long supported the full-screen compositing bypass so the window manager / compositor gets out of the way when running full-screen games/applications. That support under Wayland hasn't been in place and thus there is a performance hit for full-screen Wayland-native software. But now thanks to Red Hat's Jonas Ådahl, that infrastructure now appears to be ready.

After almost three years of research, planning and development, we're proud to present the first public version of Xabber Server. The server is licensed under the GNU AGPL v3 license, and the source code is available on GitHub. It is a fork of the superb open source XMPP server ejabberd by ProcessOne, with many custom protocol improvements and an all-new management panel.

After a summer hiatus during which I only released new packages for KDE Frameworks because they addressed a serious security hole, I am now back in business and just released KDE-5_19.09 for Slackware-current.
The packages for KDE-5_19.09 are available for download from my ‘ktown‘ repository. As always, these packages are meant to be installed on a full installation of Slackware-current which has had its KDE4 removed first. These packages will not work on Slackware 14.2. On my laptop with slackware64-current, this new release of Plasma5 runs smoothly.

Later, the county official discovered that the two men were, in fact, hired by the state court administration to try to "access" court records through "various means" to find out potential security vulnerabilities in the electronic court records.

The state court administration acknowledged that the two men had been hired, but said they were not supposed to physically break into the courthouse.

Mark M5BOP reports the complete set of amateur radio technical talks from this year's Martlesham Microwave Round Table is now available to watch on YouTube.
Videos of these MMRT 2019 talks are available:
• Practical GNUradio - Heather Lomond M0HMO

On the road to change, you’ll encounter fear and loathing. People will undoubtedly cling to old ways of working. Successfully making it to the other side will require commitment, passionate change agents, and unwavering leadership. You might wonder – is it really worth it?
Leaders who have made the switch to agile project management say that it has delivered benefits both large and small to their organizations, from the rituals that bring their team together – like daily stand-ups – to the results that make their business stronger – like better end products and happier customers.

Borislav Petkov has taken to improving the Linux kernel's memset function, an area previously criticized by Linus Torvalds and other prominent developers.
Petkov this week published his initial patch for better optimizing the memset function that is used for filling memory with a constant byte.

In addition to the work being led by DigitalOcean on core scheduling to make Hyper Threading safer in light of security vulnerabilities, IBM and Oracle engineers continue working on Kernel Address Space Isolation to help prevent data leaks during attacks.
Complementing the "Core Scheduling" work, Kernel Address Space Isolation was also talked about at this week's Linux Plumbers Conference in Lisbon, Portugal. The address space isolation work for the kernel was RFC'ed a few months ago as a feature to prevent leaking sensitive data during attacks like L1 Terminal Fault and MDS. The focus on this Kernel ASI is for pairing with hypervisors like KVM as well as being a generic address space isolation framework.

While Intel CPUs aren't shipping with 5-level paging support, they are expected to be soon and distribution kernels are preparing to enable the kernel's functionality for this feature to extend the addressable memory supported. With that, the mainline kernel is also looking at flipping on 5-level paging by default for its default kernel configuration.
Intel's Linux developers have been working for several years on the 5-level paging support for increasing the virtual/physical address space for supporting large servers with vast amounts of RAM. The 5-level paging increases the virtual address space from 256 TiB to 128 PiB and the physical address space from 64 TiB to 4 PiB. Intel's 5-level paging works by extending the size of virtual addresses to 57 bits from 48 bits.
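The address-space figures quoted above follow directly from the bit widths. A quick arithmetic check (the 46-bit and 52-bit physical widths are inferred here from the 64 TiB and 4 PiB figures, since the article only states the 48-bit and 57-bit virtual widths):

```python
# Verify the 4-level vs 5-level paging address-space numbers.
TiB = 2**40
PiB = 2**50

assert 2**48 == 256 * TiB   # 4-level paging: 48-bit virtual -> 256 TiB
assert 2**57 == 128 * PiB   # 5-level paging: 57-bit virtual -> 128 PiB
assert 2**46 == 64 * TiB    # inferred: 46-bit physical -> 64 TiB
assert 2**52 == 4 * PiB     # inferred: 52-bit physical -> 4 PiB

print("address-space arithmetic checks out")
```

Each extra paging level adds 9 bits of virtual address (one more 512-entry page-table level), which is exactly the jump from 48 to 57 bits.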

Using open source software is commonplace, with only a minority of companies preferring a proprietary-first software policy. Proponents of free and open source software (FOSS) have moved to the next phases of open source adoption, widening FOSS usage within the enterprise as well as gaining the “digital transformation” benefits associated with open source and cloud native best practices.
Companies, as well as FOSS advocates, are determining the best ways to promote these business goals, while at the same time keeping alive the spirit and ethos of the non-commercial communities that have embodied the open source movement for years.

Releasing Slax 9.11.0

A new school year has started and the next version of Slax is here too :) this time it is 9.11.0. This release includes all bug fixes and security updates from Debian 9.11 (code name Stretch), and adds a boot parameter to disable console blanking (console blanking is disabled by default).
You can get the newest version at the project's home page, there are options to purchase Slax on DVD or USB device, as well as links for free download.
Surprisingly for me we skipped 9.10, I am not sure why :)
I also experimented with the newly released Debian 10 series (code name Buster) and noticed several differences which need addressing, so Slax based on Debian 10 is in progress, but not ready yet. Considering my current workload and other circumstances, it will take some more time to get it ready, a few weeks at least.
Also: Slax 9.11 Released While Re-Base To Debian 10 Is In Development