We have finally come to the last chapter of this pygame project. In this chapter we will put three things on the game scene: a text label that indicates the player's power level, a game level indicator, and a power bar that shows the power level in graphical form. We will need to edit three files to create this mechanism. The first file we need to edit is...

In this chapter, we will further modify the previous plot_stock_technical method, which is used to plot the Bollinger Bands graph. We will create a few combo boxes that offer various extra parameters for plotting the Bollinger Bands for any selected stock. The result after this edit is impressive.

Developed by John Bollinger, Bollinger Bands® are volatility bands placed above and below a moving average. Volatility is based on the standard deviation, which changes as volatility increases and decreases. The bands automatically widen when volatility increases and contract when volatility decreases. Their dynamic nature allows them to be used on different securities with the standard settings. For signals, Bollinger Bands can be used to identify M-Tops and W-Bottoms or to determine the strength of the trend. Signals derived from narrowing BandWidth are discussed in the ChartSchool article on BandWidth.

Here is the modified version of the ongoing Stock and Forex project.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

The qtquick2 style framework is now listed as a runtime dependency for Plasma, which should hopefully provide a helpful hint to packagers that it needs to be installed and reduce the incidence of people who use certain distros seeing ugly user interfaces on QML/Kirigami apps (Kai Uwe Broulik, KDE Plasma 5.16.0)

In Kate, KWrite and other apps like KDevelop that use the KTextEditor framework, text selection now works like it does everywhere else: when text is selected, the left arrow key moves the insertion point one character to the left of the beginning of the selection, and the right arrow key moves the insertion point one character to the right of the end of the selection (Loh Tar, KDE Frameworks 5.57)

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Drush 9 has removed dynamic site aliases. Site aliases are hardcoded in YAML files rather than declared in PHP. Sadly, that means that many tricks you could do with the declaration of the site aliases are no longer available.

The only grouping possible is based on the YAML filename. So for example, with the Acquia Cloud Site Factory site aliases generated by the 'blt recipes:aliases:init:acquia' command, you can run a command on the same site across different environments.

But what you can't do is run a command on all the sites in one environment.

One use case for this is checking whether a module is enabled on any sites, so you know that it's safe to remove it from the codebase.

Currently, this is quite a laborious process, as 'drush pm-list' needs to be run for each site.

With environment aliases, this would be a one liner:

drush @hypothetical-env-alias pm-list | ag some_module

('ag' is the very useful silver searcher unix command, which is almost the same as the also excellent 'ack' but faster, and both are much better than grep.)

In the meantime, a much simpler solution is to use xargs, which I have recently found is extremely useful in all sorts of situations. Because this allows you to run one command multiple times with a set of parameters, all you need to do is pass it a list of site aliases. Fortunately, the 'drush sa' command has lots of formatting options, and one of them gives us just what we need, a list of aliases with one on each line:

drush sa --format=list

That gives us all the aliases, and we probably don't want that. So here's where ag first comes in to play, as we can filter the list, for example, to only run on live sites (I'm using my ACSF aliases here as an example):

drush sa --format=list | ag 01live

Now we have a filtered list of aliases, and we can feed that into xargs:

drush sa --format=list | ag 01live | xargs -I % drush % pm-list

Normally, xargs puts the input parameter at the end of its command, but here we want it inserted just after the 'drush' command. The -I parameter allows us to specify a placeholder where the input parameter goes, so:

xargs -I % drush % pm-list

says that we want the site name to go where the '%' is, and means that xargs will run:

drush SITE-ALIAS pm-list

with each value it receives, in this case, each site alias.

Another thing we will do with xargs is set the -t parameter, which outputs each actual command it executes on STDERR. That acts as a heading in the output, so we can clearly see which site is outputting what.

Finally, we can use ag a second time to filter the module list down to just the module we want to find out about:
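Putting the pieces together (with -t for the STDERR headings, and using a hypothetical module name, some_module, as the search term), the complete one-liner might look like this:

```shell
# List aliases, keep only live sites, run pm-list on each, filter for the module
drush sa --format=list | ag 01live | xargs -t -I % drush % pm-list | ag some_module
```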

The nice thing about the -t parameter is that as it's STDERR, it's not affected by the final pipe to ag for filtering output. So the output will consist of the drush command for the site, followed by the filtered output.

And hey presto.

In conclusion: dynamic site aliases in Drush were nice, but the maintainers removed them (as far as I can gather) because they were a mess to implement, and removing them vastly simplified things. Doing the equivalent with xargs took a bit of figuring out, but once you know how to do it, it's actually a much more powerful way to work with multiple sites at once.

This post may feel like an advert – to an extent it is. I won't plug events very often; however, this is a charity event in aid of prostate cancer and I genuinely think it will be great fun for anybody able to take part, hence the unashamed plug. The Race for Science will happen on 30th March in and around Cambridge. There is still time to enter, have fun and raise some money for a deserving charity at the same time.

A few weeks ago friends of ours, Jo (Randombird) and Josh, were celebrating their birthdays; we split into a couple of groups and had a go at a couple of escape room challenges at Lock House Games. Four of us, Josh, Isy, Jane and myself, had a go at the Egyptian Tomb. Whilst this was the second time Isy had tried an escape room, the rest of us were n00bs. I won't describe the puzzles inside because that would spoil the enjoyment of anyone who later goes on to play the game. We did make it out of the room inside our allotted time with only a couple of hints. It was great fun, and it is suitable for all ages: our group ranged from 12 to 46, and it would work well for all-adult teams as well as more mature children.

I volunteered my time and re-arranged my Monday so that I could spend the afternoon trialling the escape room elements of the challenge. There were some great people involved and the puzzles were very good indeed. I visited three different locations in Cambridge, each with very different puzzles to solve. One challenge stood head and shoulders above the others; not that the others were poor, they were at least as good as a commercial escape room. What made this challenge so outstanding wasn't the challenge itself, but that it had been set specifically for the venue that will host it, using the venue as part of the puzzle (The Race For Science is a scavenger hunt and contains several challenges at different locations within the city).

Beta testing one of the puzzles

Last Thursday I went back into town in the afternoon to beta test yet another location. Once again this was outstanding; the challenge differed from those I had trialled the previous week. This was another fantastic puzzle to solve and took full advantage of its location, both for its 'back story' and for the puzzle itself. After this we moved on to testing the scavenger hunt part of the event. This is played on the streets of the city on foot; following the clues will take you in and around some of the city's museums and landmarks, and will unlock access to the escape room challenges I had been testing earlier. My only concern is that it is played using a browser on a mobile device (i.e. a phone). I had to move around a bit in some locations to ensure that I had adequate signal. You may want to make sure that you have a fully charged battery!

The event is open to teams of up to six people and will take the form of an "immersive scavenger hunt adventure". Unfortunately I cannot take part as I have already played the game, but there is still time for you to register. So if you are able to get to Cambridge at the end of the month, please enter the Race for Science.

If digital connectivity provided the spark, it ignited because the kindling was already everywhere. The way forward is not to cultivate nostalgia for the old-world information gatekeepers or for the idealism of the Arab Spring. It’s to figure out how our institutions, our checks and balances, and our societal safeguards should function in the 21st century—not just for digital technologies but for politics and the economy in general. This responsibility isn’t on Russia, or solely on Facebook or Google or Twitter. It’s on us.

How to Build a Social Network with Drupal: The 5 Essential Modules You Will Need
radu.simileanu
Sat, 03/16/2019 - 09:24

Planning to build a social network with Drupal? A business community maybe? A team or department collaborating on an intranet or portal? Or a network grouping multiple registered users that should be able to create and edit their own content and share their knowledge? What are those key Drupal 8 modules that would help you get started?

That would help you lay the groundwork...

There are lots of social networking features in Drupal core and powerful third-party modules that you could leverage, but first you need to set up your essential kit.

The first obvious use-case is displaying delays, platform changes, etc. in the timeline and reservation details views, and notifying about such changes.
This is already implemented for trains based on KPublicTransport,
and to a very limited extent (gate changes) for flights using KPkPass
for Apple Wallet boarding passes containing a working update API endpoint.

Train delay and platform changes in the train trip details page

For KDE Itinerary to check for changes you either need to use the “Check for Updates” action in the global action drawer, or
enable automatic checking in the settings. KDE Itinerary will not reach out to online services on its own by default.

When enabled, the automatic polling tries to adapt the polling frequency to how far away an arrival or departure is, so you get current information
within minutes without wasting battery or bandwidth. This still might need a bit of fine tuning and/or support for more corner cases
(a departure delay past the scheduled arrival time was such a case for example), so feedback on this from use in practice is very much welcome.

Whenever changes are found, KDE Itinerary will also trigger a notification, which should work
on all platforms.

Train delay notification on Android

Querying online services for realtime data might yield additional information that was previously
not included in the timeline data, for example geo coordinates for all stations along the way. KDE Itinerary will try to
augment the existing data with whatever new information we come across this way, so having realtime data polling enabled
will also result in more navigation options and weather forecasts for more locations being shown.

Alternative connections

The second integration point currently being worked on is selecting alternative train connections, for example when
having missed a connection or when having an unbound reservation that isn’t specific to a trip to begin with.

This is available in the context drawer on the details page of the corresponding train reservation. KDE Itinerary
will then query for journeys along the same route as the current reservation, and allows you to pick one of the results
as the new itinerary for this trip. Realtime information is of course shown here too, if available.

Three alternative connections options for a train trip, the first is expanded to show details

While displaying the connections already works, actually saving the result is still missing. Nevertheless feedback is already
useful to see if sensible results are returned for existing bookings.

Filling gaps

The third use-case is filling gaps in the itinerary, such as how to get from the airport to the hotel by local transport.
Or similarly, determining when you have to leave from your current location to make it to the airport in time for your flight.
This would result in additional elements in the timeline containing suggested public transport routes.

Implementation of this hasn't started yet; technically it's a variation of the journey query already used in the previous point.
The bigger challenge will therefore likely be presenting this in a usable and useful way.

Contribute

This is all work in progress, so it's the best point in time to influence things; any input or help is
of course very much welcome. See our Phabricator workboard
for what’s on the todo list, for coordinating work and for collecting ideas. For questions and suggestions, please feel free
to join us on the KDE PIM mailing list or in the #kontact channel on Freenode or Matrix.

If you happen to know a source for realtime flight information that could be usable by Free Software without causing recurring cost
I’d also be thankful for hints :)

Hello people! After a while of rest, I have started a new Python project today, which I believe will take almost a month to complete. I will post updates on this project at least a few times per week on this website, so make sure you visit regularly to catch the latest progress.

In this chapter, we will just create the new project and then write a few lines of Python code that create a tkinter pop-up window. As you might have guessed from the title, yes, we are going to use the Visual Studio 2019 RC IDE to create this latest Python project. I downloaded the IDE and most of the components yesterday, and today I am ready to get to work, not only on this Python project but also on two other projects that will create applications for Windows 10 with C# and JavaScript.

If you have not yet downloaded Visual Studio 2019, you can head over to the official Visual Studio website to download the installer, which will then help you install Visual Studio as well as all the Python components needed for our project. The download size of the Python components is not that large, so you should have no problem downloading everything you need for this project.

The download size for the python components is around 1.84 GB

After you have downloaded the visual studio IDE plus the python components we can then start a brand new python project.

Select Python Application when creating a new project in Visual Studio

The next step is to fill in the project details, then press the Next button followed by the Create button to create a brand new Python project.

Fill in the project details

You can clearly see your project files from the solution explorer.

The file name is YouData.py

Now you can type the code below into the code editing panel. After that, select Debug->Start Debugging.

Hello people, this will be the last article in which we solve a simple question on CodeWars; in the next chapter we will start our next Python project. The question goes like this:

Given a number, produce the numbers from that number down to one, in reverse order, and put them in a list. For example, the number 6 should make the method return [6, 5, 4, 3, 2, 1]. Below is the solution.
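A straightforward solution (the function name here is my own, since the original snippet isn't reproduced above) is a single call to range() with a negative step:

```python
def reverse_sequence(n):
    # Count down from n to 1 inclusive, e.g. 6 -> [6, 5, 4, 3, 2, 1]
    return list(range(n, 0, -1))

print(reverse_sequence(6))  # -> [6, 5, 4, 3, 2, 1]
```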

So we have solved that simple question on CodeWars and received 2 kyu for our hard work. With that, we have temporarily concluded the question-solving chapters and will start our next Python project in the next chapter.

Guido van Rossum recently put together an
excellent post
talking about the value of infix binary operators in making certain kinds of
operations easier to reason about correctly.

The context inspiring that post is a python-ideas discussion regarding the
possibility of adding a shorthand spelling (x = a + b) to Python for the
operation:

x = a.copy()
x.update(b)

The PEP for that proposal is still in development, so I'm not going to link to
it directly [1], but the paragraph above gives the gist of the idea. Guido's
article came in response to the assertion that infix operators don't improve
readability, when we have plenty of empirical evidence to show that they do.

Where this article comes from is a key point that Guido's article mentions,
but doesn't emphasise: that those readability benefits rely heavily on
implicitly shared context between the author of an expression and the readers
of that expression.

Without a previous agreement on the semantics, the only possible general answer
to the question "What does x = a + b mean?" is "I need more information to
answer that".

The use case for + in Python that most closely corresponds with algebra is
using it with numbers - the key differences lie in the meaning of =, rather
than the meaning of +.

So if the additional information supplied is "This is a Python assignment
statement; a and b are both well-behaved finite numbers", then the
reader will be able to infer that x will be the sum of the two numbers.

Inferring the exact numeric type of x would require yet more information
about the types of a and b, as types implementing the numeric +
operator are expected to participate in a type coercion protocol that gives
both operands a chance to carry out the operation, and only raises TypeError
if neither type understands the other.
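This protocol can be sketched with a toy class (Metres is my own illustrative name, not from the original post): Python first tries the left operand's __add__, and if that returns NotImplemented, tries the right operand's __radd__ before finally raising TypeError.

```python
class Metres:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Metres):
            return Metres(self.value + other.value)
        return NotImplemented  # decline, give the other operand a chance

    __radd__ = __add__  # addition of Metres is symmetric

total = Metres(1) + Metres(2)
print(total.value)  # -> 3

try:
    Metres(1) + "three"   # str doesn't understand Metres either
except TypeError:
    print("neither operand understood the other")
```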

The original algebraic meaning then gets expressed in Python as
assert x == a + b, and successful execution of the assignment statement
ensures that assertion will pass.

In this context, types implementing the + operator are expected to provide
all the properties that would be expected of the corresponding mathematical
concepts (a + b == b + a, a + (b + c) == (a + b) + c, etc), subject
to the limitations of performing calculations on computers that actually exist.

If the given expression used uppercase letters, as in X = A + B, then the
additional information supplied may instead be "This is a matrix algebra
expression". (It's a notational convention in mathematics that matrices be
assigned uppercase letters, while lowercase letters indicate scalar values)

For matrices, addition and subtraction are defined as only being valid between
matrices of the same size and shape, so if X = A - B were to be supplied as
an additional constraint, then the implications would be:

The numpy.ndarray type, and other types implementing the same API, bring the
semantics of matrix algebra to Python programming, similar to the way that the
builtin numeric types bring the semantics of scalar algebra.

This means that if the additional information supplied is "This is a Python
assignment statement; A and B are both matrices of the same size and
shape containing well-behaved finite numbers", then the reader will be able to
infer that X will be a new matrix of the same shape and size as matrices
A and B, with each element in X being the sum of the corresponding
elements in A and B.
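Those semantics can be sketched in pure Python with nested lists standing in for matrices (numpy.ndarray provides this natively; mat_add is just an illustrative helper):

```python
def mat_add(a, b):
    # Element-wise sum; both matrices must have the same size and shape
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        raise ValueError("matrices must have the same shape")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(mat_add(A, B))  # -> [[11, 22], [33, 44]]
```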

As with scalar algebra, inferring the exact numeric type of the elements of
X would require more information about the types of the elements in A
and B, the original algebraic meaning gets expressed in Python as
assert X == A + B, successful execution of the assignment statement
ensures that assertion will pass, and types implementing + in this context
are expected to provide the properties that would be expected of a matrix in
mathematics.

Mathematics doesn't provide a convenient infix notation for concatenating two
strings together (aside from writing their names directly next to each other),
so programming language designers are forced to choose one.

While this does vary across languages, the most common choice is the one that
Python uses: the + operator.

This is formally a distinct operation from numeric addition, with different
semantic expectations, and CPython's C API somewhat coincidentally ended up
reflecting that distinction by offering two different ways of implementing
+ on a type: the tp_as_number->nb_add and tp_as_sequence->sq_concat slots.
(This distinction is absent at the Python level: only __add__, __radd__
and __iadd__ are exposed, and they always populate the relevant
tp_as_number slots in CPython)

The key semantic difference between algebraic addition and string concatenation is
that in algebraic addition, the order of the operands doesn't matter
(a + b == b + a), while in string concatenation, the order of the
operands determines which items appear first in the result (e.g.
"Hello" + "World" == "HelloWorld" vs "World" + "Hello" == "WorldHello").
This means that if a + b == b + a is true when concatenating strings,
then at least one of the strings is empty, or both strings are repetitions
of a common substring (with identical strings as the simplest case).
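A quick demonstration of both points:

```python
a, b = "Hello", "World"
assert a + b == "HelloWorld"
assert b + a == "WorldHello"
assert a + b != b + a            # order matters for concatenation

# Commutativity reappears when both strings repeat a common unit:
assert "ab" + "abab" == "abab" + "ab" == "ababab"
```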

Another less obvious semantic difference is that strings don't participate in
the type coercion protocol that is defined for numbers: if the right hand
operand isn't a string (or string subclass) instance, they'll raise
TypeError immediately, rather than letting the other operand attempt the
operation.

Python goes further than merely allowing + to be used for string
concatenation: it allows it to be used for arbitrary sequence concatenation.

For immutable container types like tuple, this closely parallels the way
that string concatenation works: a new immutable instance of the same type is
created containing references to the same items referenced by the original
operands:
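For example (a minimal sketch, since the original sample isn't reproduced above):

```python
a = (1, ["shared"])
b = (2,)
x = a + b                 # a new tuple is created
assert x == (1, ["shared"], 2)
assert x[1] is a[1]       # the inner list is referenced, not copied
```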

Mutable sequence types add yet another variation to the possible meanings of
+ in Python. For the specific example of x = a + b, they're very similar
to immutable sequences, creating a fresh instance that references the same items
as the original operands:

The other difference is that where + remains restrictive as to the
container types it will work with, += is typically generalised to work
with arbitrary iterables on the right hand side, just like the
MutableSequence.extend() method:
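With lists, for example (a sketch, since the original samples aren't reproduced above):

```python
a = [1, 2]
b = [3, 4]
x = a + b                  # fresh list referencing the same items
assert x == [1, 2, 3, 4] and x is not a

try:
    a + (5, 6)             # + stays restrictive about container types
except TypeError:
    pass

a += (5, 6)                # += accepts any iterable, like extend()
assert a == [1, 2, 5, 6]
```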

Amongst the builtins, list and bytearray implement these semantics
(although bytearray limits even in-place concatenation to bytes-like
types that support memoryview style access). Elsewhere in the standard
library, collections.deque and array.array are other mutable sequence
types that behave this way.

Multisets are a concept in mathematics that allow for values to occur in a set
more than once, with the multiset then being the mapping from the values
themselves to the count of how many times that value occurs in the multiset
(with a count of zero or less being the same as the value being omitted from
the set entirely).

While they don't natively use the x = a + b notation the way that scalar
algebra and matrix algebra do, the key point regarding multisets that's relevant
to this article is the fact that they do have a "Sum" operation defined, and the
semantics of that operation are very similar to those used for matrix addition:
element wise summation for each item in the multiset. If a particular value is
only present in one of the multisets, that's handled the same way as if it were
present with a count of zero.
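Python's collections.Counter models exactly this, and spells the sum operation as +:

```python
from collections import Counter

a = Counter({"spam": 2, "eggs": 1})
b = Counter({"spam": 1, "ham": 3})   # no "eggs" key: treated as count zero

# Element-wise sums, matching the multiset "Sum" operation
assert a + b == Counter({"spam": 3, "ham": 3, "eggs": 1})
```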

Python's dictionaries are quite interesting mathematically, as in mathematical
terms, they're not actually a container. Instead, they're a function mapping
between a domain defined by the set of keys, and a range defined by a multiset
of values. Attempting to define algebraic operations on them is thus a bit like
attempting to define meaningful algebraic operations on "def f(): pass".

This means that unlike the introduction of collections.Counter (which was
grounded in the semantics of mathematical multisets and borrowed its Python
notation from element-wise addition on matrices), or introducing the matrix
multiplication operator (which was grounded in the semantics of matrix algebra,
and only needed a text-editor-friendly symbol assigned, similar to using *
instead of × for scalar multiplication and / instead of ÷ for
division), any binary in-fix operator support for merging dictionaries would
be blazing completely new conceptual trails not previously found in either
mathematics or in other mainstream programming languages.

That seems to me like a big leap to take for something where the in-place form
already has a perfectly acceptable spelling (d1.update(d2)), and a more
expression-friendly variant could be provided as a new dictionary class method:
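The code for that proposed class method wasn't reproduced above; a minimal sketch (as a dict subclass, since the builtin can't be patched, and using from_merge as the proposed name) might look like:

```python
class MergeableDict(dict):
    @classmethod
    def from_merge(cls, *sources):
        # Build a new mapping by merging sources left to right;
        # later sources win on duplicate keys, mirroring update()
        result = cls()
        for source in sources:
            result.update(source)
        return result

merged = MergeableDict.from_merge({"a": 1, "b": 2}, {"b": 20, "c": 3})
assert merged == {"a": 1, "b": 20, "c": 3}
```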

With that defined, then the exact equivalent of the proposed d1 + d2 would
be type(d1).from_merge(d1, d2), and in practice, you would often give the
desired result type explicitly rather than inferring it from the inputs
(e.g. dict.from_merge(d1, d2)).

However, the PEP is still in the very first stage of the discussion and review
process, so it's entirely possible that by the time it reaches python-dev
it will be making a more modest proposal like a new dict class method,
rather than the current proposal of operator syntax support.

[1] The whole point of the python-ideas phase of discussion is to get a PEP
ready for a more critical review by the core development team, so it isn't
fair to the PEP author to invite wider review before they're ready for it.

In this article, we will create a simple Python program which removes exclamation marks from a string, based on a second parameter that specifies how many exclamation marks should be removed, scanning from the beginning of the string to the end.
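A sketch of such a program (the function name is my own; str.replace() with a count argument does the heavy lifting):

```python
def remove_exclamations(s, n):
    # Remove at most n '!' characters, scanning from the start of the string
    return s.replace("!", "", n)

print(remove_exclamations("Hi!!", 1000))   # -> Hi
print(remove_exclamations("Hi!!!!!", 2))   # -> Hi!!!
```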

If we enter ('Hi!!', 1000) into the above program, the method will only remove two '!' characters, because that is the total number of exclamation marks the string has. And if we enter ("Hi!!!!!", 2) into the method, the program will remove only two exclamation marks and leave the others untouched. If you have a better solution, do leave it in the comment box below this post.

In this chapter, we will make a final adjustment to the main menu page buttons so they will be positioned nicely on the game scene. We need to edit three files in this chapter. We will adjust the button positions in the start scene class, which begins with the following imports: from BgSprite import BgSprite, from GameSprite import GameSprite, from pygame.locals import *, from pygame import math as mt, and import pygame.

In this chapter, we will continue to develop our Forex/Stock application by introducing another button which, when clicked, will call a method that makes an API call to retrieve the sector performance data, which will then be plotted on a graph.

The eighth release of littler as a CRAN package is now available, continuing the thirteen-ish year history of a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only started doing rather recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!).

This release brings a small update (thanks to Gergely) to the scripts install2.r and installGithub.r, allowing more flexible setting of repositories, and fixes a minor nag from CRAN concerning autoconf programming style.

The NEWS file entry is below.

Changes in littler version 0.3.6 (2019-01-26)

Changes in examples

The scripts installGithub.r and install2.r get a new option -r | --repos (Gergely Daroczi in #67)

Lately I moved my blog to Pelican. I really like how simple and flexible it is.
So in this post I'd like to highlight one particular aspect of my Pelican's
workflow: how to setup a Debian-based environment to build your Pelican's
website, and how to leverage Pelican's Makefile to transparently use this build
environment. Overall, this post has more to do with the Debian tooling, and
little with Pelican.

Introduction

First thing first, why would you setup a build environment for your project?

Imagine that you run Debian stable on your machine, then you want to build
your website with a fancy theme, that requires the latest bleeding edge
features from Pelican. But hey, in Debian stable you don't have these shiny new
things, and the version of Pelican you need is only available in Debian
unstable. How do you handle that? Will you start messing around with apt
configuration and pinning, and try to install an unstable package in your
stable system? Wrong, please stop.

Another scenario, the opposite: you run Debian unstable on your system. You
have all the new shiny things, but sometimes an update of your system might
break things. What if you update, and then can't build your website because of
this or that? Will you wait a few days until another update comes and fixes
everything? How many days before you can build your blog again? Or will you
dive in the issues, and debug, which is nice and fun, but can also keep you
busy all night, and is not exactly what you wanted to do in the first place,
right?

So, for both of these issues, there's one simple answer: setup a build
environment for your project. The most simple way is to use a chroot, which
is roughly another filesystem hierarchy that you create and install somewhere,
and in which you will run your build process. A more elaborate build
environment is a container, which brings more isolation from the host system,
and many more features, but for something as simple as building your website on
your own machine, it's kind of overkill.

So that's what I want to detail here, I'll show you the way to setup and use a
chroot. There are many tools for the job, and for example Pelican's official
documentation recommends virtualenv, which is kind of the standard Python
solution for that. However, I'm not too much of a Python developer, and I'm
more familiar with the Debian tools, so I'll show you the Debian way instead.

To create a basic, minimal Debian system, the usual command is debootstrap.
Then in order to actually use this new system, we'll use schroot. So be
sure to have these two packages installed on your machine.

sudo apt install debootstrap schroot

It seems that the standard location for chroots is /srv/chroot, so let's
create our chroot there. It also seems that the traditional naming scheme for
these chroots is something like SUITE-ARCH-APPLICATION, at least that's what
other tools like sbuild do. While you're free to do whatever you want, in
this tutorial we'll try to stick to the conventions.
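The creation step itself wasn't reproduced above; a plausible reconstruction (the suite and mirror are my assumptions, and the chroot name matches the one used in the schroot configuration below):

```shell
# Create the chroot directory and install a minimal Debian buster into it
SYSROOT=/srv/chroot/buster-amd64-pelican
sudo mkdir -p ${SYSROOT:?}
sudo debootstrap buster ${SYSROOT:?} http://deb.debian.org/debian
```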

And there we are, we just installed a minimal Debian system in $SYSROOT, how
easy and neat is that! Just run a quick ls there, and see for yourself:

ls ${SYSROOT:?}

Now let's set up schroot to be able to use it. schroot requires a small
configuration file that tells it how to use this chroot. This is where things
might get a bit complicated and cryptic for the newcomer.

So for now, stick with me, and create the schroot config file as follows:
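A minimal config along those lines might look like this (a sketch: the config
file name and the overlay union type are my assumptions, matching the chroot
created above):

```shell
# Create /etc/schroot/chroot.d/buster-amd64-pelican.conf (needs root).
cat << EOF | sudo tee /etc/schroot/chroot.d/buster-amd64-pelican.conf
[buster-amd64-pelican]
users=$LOGNAME
root-users=$LOGNAME
source-root-users=$LOGNAME
directory=/srv/chroot/buster-amd64-pelican
union-type=overlay
EOF
```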

Here, we tell schroot who can use this chroot ($LOGNAME, that's you), both as a
normal user and as root. We also say where the chroot directory is located, and
we want an overlay, which means that the chroot will actually be read-only,
and during operation a writable overlay will be stacked up on top of it, so
that modifications are possible, but are not saved when you exit the chroot.

In our case, it makes sense because we have no intention of modifying the build
environment. The basic idea with a build environment is that it's identical for
every build, we don't want anything to change, we hate surprises. So we make it
read-only, but we also need a writable overlay on top of it, in case some
process might want to write things in /var, for example. We don't care about
these changes, so we're fine discarding this data after each build, when we
leave the chroot.
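That setup step might look like this (a sketch, assuming the chroot name used
above; the exact apt invocation is my choice):

```shell
# Enter the *source* chroot as root, install the build tools, and clean up.
schroot -c source:buster-amd64-pelican -u root -- \
    sh -c 'apt-get update && apt-get install --yes make pelican && apt-get clean'
```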

In this command, we log into the source chroot as root, and we install the two
packages make and pelican. We also clean up after ourselves, to save a bit
of space on the disk.

At this point, our chroot is ready to be used. If you're new to all of this,
then read the next section, I'll try to explain a bit more how it works.

A quick introduction to schroot

In this part, let me try to explain a bit more how schroot works. If you're
already acquainted, you can skip this part.

So now that the chroot is ready, let's experiment a bit. For example, you might
want to start by listing the chroots available:

$ schroot -l
chroot:buster-amd64-pelican
source:buster-amd64-pelican

Interestingly, there are two of them... So, this is due to the overlay thing
that I mentioned just above. Using the regular chroot (chroot:) gives you
the read-only version, for daily use, while the source chroot (source:)
allows you to make persistent modifications to the filesystem, basically for
installation and maintenance. In effect, the source chroot has no overlay mounted on
top of it, and is writable.

So you can experiment some more. For example, to have a shell into your regular
chroot, run:

$ schroot -c chroot:buster-amd64-pelican

Notice that the namespace prefix (e.g. chroot: or source:) is optional: if you
omit it, schroot will be smart and choose the right namespace. So the command
above is equivalent to:

$ schroot -c buster-amd64-pelican

Let's try to see the overlay thing in action. For example, once inside the
chroot, you could create a file in some writable place of the filesystem.
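For instance, something like this (a sketch; /var/tmp is just an arbitrary
writable location I picked for the demonstration):

```shell
$ schroot -c buster-amd64-pelican
(buster-amd64-pelican)$ touch /var/tmp/overlay-test
(buster-amd64-pelican)$ exit
$ schroot -c buster-amd64-pelican
(buster-amd64-pelican)$ ls /var/tmp/
# the file is gone: the overlay was discarded when the first session ended
```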

Not only are all your files available in the chroot, you can also create new
files, delete existing ones, and so on. It doesn't even matter if you're inside
or outside the chroot, and the reason is simple: by default, schroot will mount
the /home directory inside the chroot, so that you can access all your files
transparently. For more details, just type mount inside the chroot, and see
what's listed.

So, this default of schroot is actually what makes it super convenient to use,
because you don't have to bother about bind-mounting every directory you care
about inside the chroot, which is actually quite annoying. Having /home
directly available saves time: what you want to isolate are the tools you need
for the job (basically /usr), while what you want to keep at hand is the data
you work with (supposedly in /home). And schroot gives you just that,
out of the box, without having to fiddle too much with the configuration.

If you're not familiar with chroots, containers, VMs, or more generally bind
mounts, maybe it's still very confusing. But you'd better get used to it, as
virtual environments are very standard in software development nowadays.

But anyway, let's get back to the topic. How to make use of this chroot to
build our Pelican website?

Chroot usage with Pelican

Pelican provides two helpers to build and manage your project: one is a
Makefile, and the other is a Python script called fabfile.py. As I said
before, I'm not really a seasoned Pythonista, but it happens that I'm quite a
fan of make, hence I will focus on the Makefile for this part.

So, here's what your daily blogging workflow might look like, now that
everything is in place.

Open a first terminal, and edit your blog posts with your favorite editor:

$ nano content/bla-bla-bla.md

Then open a second terminal, enter the chroot, build your blog and serve it:
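For example (a sketch, assuming the chroot name from above and the stock
Pelican Makefile targets):

```shell
$ schroot -c buster-amd64-pelican
(buster-amd64-pelican)$ make html
(buster-amd64-pelican)$ make serve
```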

This is easy and neat, but guess what, we can even do better. Open the Makefile
and have a look at the very first lines:

PY?=python3
PELICAN?=pelican

It turns out that the Pelican developers know how to write Makefiles, and they
were kind enough to allow their users to easily override the default commands.
In our case, it means that we can just replace these two lines with these ones:
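Something along these lines (a sketch; it relies on schroot's ability to run a
command given after --):

```makefile
PY?=schroot -c buster-amd64-pelican -- python3
PELICAN?=schroot -c buster-amd64-pelican -- pelican
```

With that, every python3 and pelican invocation from the Makefile transparently
runs inside the chroot.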

And after these changes, we can now completely forget about the chroot, and
simply type make html and make serve. The chroot invocation is now handled
automatically in the Makefile. How neat!

Maintenance

So you might want to update your chroot from time to time, and you do that with
apt, like for any Debian system. Remember the distinction between regular
chroot and source chroot due to the overlay? If you want to actually modify
your chroot, what you want is the source chroot. And here's the one-liner:
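Something like this should do (a sketch; the exact apt subcommands are my
choice):

```shell
# Upgrade the source chroot in place; the changes persist across sessions.
schroot -c source:buster-amd64-pelican -u root -- \
    sh -c 'apt-get update && apt-get --yes dist-upgrade && apt-get clean'
```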

The general idea of keeping your build environments separated from your host
environment is a very important one if you're a software developer, and
especially if you're doing consulting and working on several projects at the
same time. Installing all the build tools and dependencies directly on your
system can work at the beginning, but it won't get you very far.

schroot is only one of the many tools that exist to address this, and I
think it's Debian-specific. Maybe you have never heard of it, as chroots in
general are far from the container hype, even though they have some common
use-cases. schroot has been around for a while, it works great, it's simple,
flexible, what else? Just give it a try!

It's also well integrated with other Debian tools, for example you might use it
through sbuild to build Debian packages (another daily task that is better
done in a dedicated build environment), so I think it's a tool worth knowing if
you're doing some Debian work.

That's about it, in the end it was mostly a post about schroot, hope you liked
it.
