Tuesday, October 16, 2018

Puppet 5 has been released, and it comes with several exciting enhancements and features that promise to make configuration management much more streamlined. This article takes a comprehensive look at these new features and enhancements.

Puppet 5 was released in 2017, and according to Eric Sorensen, director of product management at Puppet, the goal was to establish Puppet as a one-stop destination for all configuration management requirements. Here are the four primary goals of this release:

To standardize the version
numbering of all the major Puppet components (Puppet Agent, PuppetDB, and
Puppet Server) to 5, and deliver them as part of a unified platform

To include Hiera 5 with eyaml
as a built-in capability

To provide clean UTF-8 support

To move network communications
to fast, interoperable JSON

Customer feedback

Customer and community feedback played a major role in setting the goals for Puppet 5’s release, helping the developers identify and define certain patterns, such as:

Different version numbers
across components were a huge source of confusion

There was a lot of confusion about how to combine components into a working installation and where each component would fit

Since both Facter 3 and
PuppetDB 3 seamlessly rolled into PC1, guaranteeing a new Puppet Collection for
every major release didn’t make much sense

However, the makers ensured that one critical guarantee was not affected: modules that worked on Puppet 4 will work unchanged under Puppet 5.

New features

Puppet 5 comes
with some power-packed new features; have a look:

The call function: A call(name, args, …) function has been added, which allows you to call a function directly by its name (a quick demo of both new functions follows this list)

The unique function: Earlier, you had to include the stdlib module to get the unique function. None of those hassles anymore! The unique function is now directly available in Puppet 5. What’s more, the function can also handle Hash and Iterable data types. In addition, you can now give a code block that determines the value on which uniqueness is computed.
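For a quick look at both functions in action, here is a minimal sketch in Python that shells out to the Puppet CLI. It assumes a puppet-agent 5.x installation with puppet on the PATH; the messages and sample values are purely illustrative:

```python
import subprocess

# Two throwaway manifests exercising the new built-ins; each string is
# Puppet code, applied as a one-off manifest via `puppet apply -e`.
snippets = [
    # call(): invoke a function dynamically by its name
    "call('notice', 'dispatched by name')",
    # unique(): built in now, with an optional lambda that determines
    # the value on which uniqueness is computed (here: rounded floats)
    "$u = [1.1, 1.4, 2.0].unique |$x| { sprintf('%.0f', $x) }\nnotice($u)",
]

for code in snippets:
    subprocess.run(["puppet", "apply", "-e", code], check=True)
```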

Enhancements

Time to take a
look at some exciting new enhancements that come with Puppet 5:

Switched from PSON to JSON as default: In Puppet 5, agents now download node information, catalogs, and file metadata in JSON by default instead of PSON. The move to JSON ensures enhanced interoperability with other languages and tools, while also enabling better performance, especially when the master is parsing JSON facts and reports from agents. Puppet 5 also accepts JSON-encoded facts.

Ruby 2.4: Puppet now uses Ruby 2.4, which ships in the puppet-agent package. All you have to ensure is that user-installed Puppet agent gems are reinstalled after upgrading to Puppet agent 5.0. This is necessary because of differences between the Ruby APIs of Ruby 2.1 and 2.4. Further, some gems may also need to be upgraded to versions compatible with Ruby 2.4.

HOCON gem is a dependency now: The HOCON gem, which was previously shipped in the puppet-agent package, is now also a dependency of the Puppet gem.

Silence warnings with metadata.json: You
can now turn off warnings from faulty metadata.json by setting --strict=off.

Updated Puppet Module Tool dependencies: The gem dependencies of the Puppet Module Tool have been updated to use puppetlabs_spec_helper 1.2.0 or later, which runs metadata-json-lint as part of the validate rake task.

Hiera 5 default file: Default Hiera 5-compliant files now go into the confdir and the environment directory: Puppet creates an appropriate v5 hiera.yaml in both $confdir and $environment. Moreover, if Puppet detects an existing hiera.yaml in either $confdir or $environment, it won’t install a new file in that location or remove $hieradata.

Performance boosts

All these enhancements and new features have ushered in performance boosts across the board. Puppet 5 agent runtimes have decreased by 30% at equivalent loads (that is, from an average of 8 seconds to 5.5 seconds). In addition, Puppet 5 server CPU utilization is at least 20% lower than Puppet 4’s in all scenarios, while CPU utilization for Puppet 5 PuppetDB and PostgreSQL has also dropped significantly in all scenarios.

Catalog compile times of Puppet 5, as reported by Puppet Server, have been reduced by 7% to 10% compared to Puppet 4. Puppet 5 can now scale to 40% more agents with no deterioration in runtime performance, whereas Puppet 4 agent runtimes grew disastrously long when scaled to the same number of agents.

If you liked this article and want to learn more about
Puppet 5, you can explore Puppet 5 Cookbook – Fourth Edition. This book takes you from a basic knowledge
of Puppet to a complete and expert understanding of Puppet’s latest and most
advanced features. Puppet 5 Cookbook – Fourth Edition is for anyone who builds and administers
servers, especially in a web operations context.

(This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network or GovCloud Network Partners.)

Tuesday, October 9, 2018

Amidst volatile markets, dynamic technology shifts, and ever-increasing customer demands, it is imperative for IT organizations to develop flexible, scalable, high-quality applications that exceed expectations and enhance productivity. A software application has numerous moving parts which, if not effectively maintained, will inevitably affect the final quality and end-user experience.

This is where
configuration management (CM) comes into play, its purpose being to maintain
the integrity of the product or system throughout its lifecycle while also
making the deployment process controllable and repeatable in order to ensure
higher quality. Robust configuration management brings the following advantages
to the table:

Mitigates redundant tasks

Manages concurrent updates

Eliminates problems related to
configuration

Streamlines inter-team
coordination

Makes defect tracking easier

There are several effective CM tools out there, such as Puppet, Chef, Ansible, and CFEngine, which provide automation for infrastructure, cloud, compliance, and security management, as well as integration for continuous integration and continuous deployment (CI/CD). However, deciding which tool to select for an organization’s automation requirements is the most critical task for a sysadmin.

A lot of sysadmins will agree that their daily chores keep them from staying up to date on automation. When they do spend time learning the nuances, they come across multiple CM tools that, in theory, all offer the same benefits. This further complicates the decision of which CM tool to choose, especially for people who are just getting started.

So, what is the best tool for people who have only a minimal grasp of automation? Ansible, and justifiably so! You may ask why. This article discusses the five reasons that make Ansible one of the most reliable and efficient CM tools out there.

An end-to-end tool to simplify automation

Ansible is an end-to-end tool that helps perform all kinds of automation tasks, right from controlling and visualization to simulation and maintenance. This is because Ansible is developed in Python, which gives it access to all general-purpose language features and thousands of existing Python packages that you can use to create your own modules. With over 1,300 modules, Ansible simplifies several aspects of IT infrastructure, including web, database, network, cloud, cluster, monitoring, and storage.

Configuration Management: Ansible’s most attractive feature is its playbooks, which are simple instructions, or recipes, meant to guide Ansible through the task at hand. Playbooks are written in YAML and are human-readable, which makes it all the easier to navigate and work with Ansible. Playbooks let you manage changes as code, while natively handling desired state and idempotency (a minimal playbook run is sketched after this list).

Orchestration: Ansible, though highly simplified, can’t be underestimated when it comes to its orchestration power. It effortlessly integrates with any area of the IT infrastructure, be it provisioning virtual machines (VMs) or creating firewall rules. Moreover, Ansible comes in handy for aspects that other tools leave gaps in, such as zero-downtime, continuous updates for multitier applications across the infrastructure.

Provisioning: With several modules for containers (Docker) and virtualization (VMware, AWS, OpenStack, Azure, and oVirt), Ansible integrates easily with a wide range of provisioning tasks to provide robust and efficient automation.
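As a taste of how little ceremony a playbook needs, here is a minimal sketch that writes a one-task playbook and runs it against the local machine. It assumes Ansible is installed and on the PATH, and the file name site.yml is just an example:

```python
import pathlib
import subprocess

# A one-task playbook: ping every host in the inventory.
playbook = """\
- hosts: all
  gather_facts: false
  tasks:
    - name: Check connectivity
      ping:
"""
pathlib.Path("site.yml").write_text(playbook)

# `-i localhost,` (note the trailing comma) is an inline inventory,
# and `-c local` skips SSH so the demo runs on this machine.
subprocess.run(
    ["ansible-playbook", "-i", "localhost,", "-c", "local", "site.yml"],
    check=True,
)
```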

Faster learning curve

Thanks to easy initial configuration and installation, Ansible’s learning curve is very gentle. Figure this: you can install, configure, and execute ad-hoc commands for any number of servers within 30 minutes, no matter what the task is, be it daylight saving adjustments, time synchronization, root security, server updates, and so on.

Moreover, it takes no time, even for a beginner, to understand the syntax and workflows, owing to the fact that Ansible uses YAML (YAML Ain’t Markup Language). YAML is human-readable and, therefore, extremely user-friendly and easy to understand. Add to that the Python libraries and modules, and you have a very simple yet quite powerful CM tool in your hands.

Highly adaptive and flexible

Unlike legacy infrastructure models, which take too long to converge to a fully automated environment, Ansible is highly flexible in this regard. As the tech space becomes increasingly dynamic, it is only natural that environments have to be flexible enough to absorb any changes without affecting the output. Otherwise, the result may be undesired costs, inter-team conflicts, and manual interventions.

Ansible, however, effortlessly adapts to mixed environments, peacefully coexisting with partially and fully automated environments alike, while also enabling seamless transitions between models.

Full Ansible control

No agents need to be installed at the endpoints for Ansible; all you need is a server with Ansible installed, which manages access to the other servers through the SSH (for Linux environments) and WinRM (Windows Remote Management) protocols. Desired settings are applied to the hosts defined in the inventory through playbooks, but they can also be applied ad hoc via the command line without any file definitions whatsoever, as sketched below. This makes Ansible much faster than traditional client-server models.
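For instance, a single ad-hoc command can touch every host in the inventory with no playbook file at all; here is a minimal sketch, where the host names are hypothetical:

```python
import subprocess

# Ad hoc and agentless: ping two hosts over SSH using an inline
# inventory (trailing comma) and the built-in ping module.
subprocess.run(
    ["ansible", "all", "-i", "web1,web2,", "-m", "ping"],
    check=True,
)
```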

Instant automation

Right from the instant you can ping your hosts through Ansible, you can start automating your environment. It’s advisable to begin with smaller tasks, duly following best practices, and to prioritize tasks that contribute to achieving the business goals. This will help you identify and solve problems much more swiftly, while also saving time and enhancing efficiency.

In a nutshell, where Ansible wins over its competitors is in its simplicity (even a beginner can master it in no time) and in powerful features that make configuration management a cakewalk. Choosing Ansible will help heal the Achilles’ heel of automation while also majorly enhancing productivity and efficiency.

If you found this article interesting and wish to
learn more about Ansible, you can explore Learn Ansible, an end-to-end guide that will aid you in effectively automating
cloud, security, and network infrastructure. Learn Ansible follows a hands-on approach to give you practical experience in
writing playbooks and roles, and executing them.

(This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network or GovCloud Network Partners.)

Monday, October 1, 2018

Google Cloud Platform (GCP) is considered one of the Big 3 cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution that supports AI capabilities for designing and developing smart models to turn your data into insights at an affordable cost.

GCP offers many machine learning APIs, among which we will take a look at the three most popular:

Cloud Speech API

A powerful API from GCP! It enables the user to convert speech to text using a neural network model, and it recognizes over 100 languages from around the world. It can also filter unwanted noise and content from the transcribed text under various types of environments. It supports context-aware recognition and works on any device, any platform, anywhere, including IoT. Its features include Automatic Speech Recognition (ASR), global vocabulary, streaming recognition, word hints, real-time audio support, noise robustness, inappropriate content filtering, and support for integration with other GCP APIs (a minimal transcription call is sketched after the examples below).

· Buying products and services with the sound of your voice: Another popular and mainstream application of biometrics in general is mobile payments, and voice recognition has also made its way into this highly competitive arena.

· A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays has voice recognition software in the form of AI machine learning algorithms.
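Here is a minimal transcription sketch using the google-cloud-speech Python client (pip install google-cloud-speech); the audio file name is hypothetical, and credentials are assumed to be configured via GOOGLE_APPLICATION_CREDENTIALS:

```python
from google.cloud import speech

client = speech.SpeechClient()

# Read a short local WAV file and wrap it for the API.
with open("sample.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Synchronous recognition works for audio up to about a minute.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```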

Cloud Translation API

Machine Translation (MT) is a part of natural language processing (NLP), which in turn is a branch of artificial intelligence, and it has been the main focus of the NLP community for many years. MT deals with translating text in a source language into text in a target language. The Cloud Translation API provides a programmatic interface to translate an input string from one language into a targeted language, and it’s highly responsive, scalable, and dynamic in nature.

This API enables translation among 100+ languages. It also supports accurate automatic language detection. It provides a feature to read a web page’s contents and translate them into another language, so the text need not be extracted from a document first. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates, an adjustable quota, and affordable pricing (a minimal call is sketched at the end of this overview).

[Figure: architecture of the translation model]

In other words, the Cloud Translation API is an adaptive machine translation algorithm.

The most important
application of the model is the conversion of a regional language to a
foreign language.

The cost of text translation and language detection is $20 per 1
million characters.
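Here is a minimal sketch using the google-cloud-translate client’s v2 interface (pip install google-cloud-translate), with credentials assumed to be configured; the sample sentence is arbitrary:

```python
from google.cloud import translate_v2 as translate

client = translate.Client()

# The source language is detected automatically; translate into German.
result = client.translate("The weather is lovely today.", target_language="de")
print(result["detectedSourceLanguage"])  # e.g. "en"
print(result["translatedText"])
```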

Use cases

Now that we have learned about the concepts and applications of the API, let’s look at two use cases where it has been successfully implemented:

· Rule-based Machine Translation

· Local Tissue Response to Injury and Trauma

We will discuss
each of these use cases in the following sections.

Rule-based Machine Translation

The steps to implement rule-based Machine Translation successfully are as follows (a toy sketch of the pipeline follows the list):

1. Input text

2. Parsing

3. Tokenization

4. Apply the rules to extract the meaning of prepositional phrases

5. Map each word of the input language to a word of the target language

6. Frame the sentence in the target language
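To make the pipeline concrete, here is a toy sketch (not the Cloud Translation API) with a hypothetical three-word English-to-Spanish lexicon and a single reordering rule; real rule-based systems use full grammars:

```python
# Hypothetical lexicon: English -> Spanish, word for word.
LEXICON = {"the": "el", "red": "rojo", "car": "coche"}

def translate_rule_based(sentence: str) -> str:
    tokens = sentence.lower().split()  # tokenization
    # Rule: Spanish adjectives usually follow the noun they modify,
    # so swap the adjective-noun pair detected in the input.
    if len(tokens) == 3 and tokens[1] == "red":
        tokens = [tokens[0], tokens[2], tokens[1]]
    # Map each source word to a target word (step 5), then frame
    # the target sentence (step 6).
    return " ".join(LEXICON.get(t, t) for t in tokens)

print(translate_rule_based("The red car"))  # -> "el coche rojo"
```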

Local tissue response to injury and trauma

We can learn about the Machine Translation process from the responses of local tissue to injuries and trauma: the human body follows a process similar to Machine Translation when dealing with injuries. We can roughly describe the process as follows:

1. Hemorrhaging from lesioned vessels and blood clotting

2. Recognition of blood-borne physiological components, which leak from the usually closed sanguineous compartment, as foreign material by the surrounding tissue, since they are not tissue-specific

3. Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue

4. Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts

Cloud Vision API

Cloud Vision API is a powerful image analytics tool. It enables users to understand the content of an image. It helps in finding various attributes or categories of an image, such as labels, web entities, text, document text, image properties, and safe search, returning the results for that image as JSON. Within the labels field there are many sub-categories, such as text, line, font, area, graphics, screenshots, and points. How much of the area is graphics, what percentage is text, what percentage is empty, and whether the image appears partially or fully on the web are all included in the web content results.

The document annotation consists of blocks of the image with detailed descriptions, the properties annotation visualizes the colors used in the image, and any unwanted or inappropriate content is flagged through safe search. The main features of this API are label detection, explicit content detection, logo and landmark detection, face detection, and web detection; to extract text, the API uses Optical Character Recognition (OCR), with support for many languages (a minimal label detection call is sketched below). Note that it does not support face recognition, that is, identifying whose face appears.
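Here is a minimal label detection sketch using the google-cloud-vision client (pip install google-cloud-vision); the image path is hypothetical and credentials are assumed to be configured:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load a local image and ask for label annotations.
with open("street_scene.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each label has a description and a confidence score in [0, 1].
    print(f"{label.description}: {label.score:.2f}")
```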

[Figure: architecture of the Cloud Vision API]

We can summarize the functionality of the API as extracting quantitative information from images: the input is an image, and the output is numbers and text.

The components used
in the API are:

· Client Library

· REST API

· RPC API

· OCR Language Support

· Cloud Storage

· Cloud Endpoints

Applications of the
API include:

· Industrial Robotics

· Cartography

· Geology

· Forensics and Military

· Medical and Healthcare

Cost: Free of charge for the first 1,000 units per month; after
that, pay as you go.

(This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network or GovCloud Network Partners.)