May 25, 2019

Kube is still alive! I got distracted for a while, both professionally and privately, and writing blog posts is unfortunately always the first thing that ends up on the chopping block. Anyways, lots of progress has been made (103 commits in sink, ~130 commits in the kube codebase):

For sink we had a variety of bugfixes and performance improvements, especially on the more recent CalDAV/CardDAV backends.

For CalDAV/CardDAV we now do basic autodiscovery using the .well-known URLs. The DNS part of the spec has not been implemented so far.
This means for a properly set-up server you only have to specify the base url, and everything else will be discovered automatically from there.

On the more user-facing front we have:

Sent emails are now collapsed by default

Plain text is now the preferred method of viewing emails. You can still view the HTML variant if available by clicking a button.

The Addressbook is no longer read-only and you can now create contacts as well.

A visually reworked composer that avoids becoming too wide and removes a lot of the visual clutter.

The calendar can now render recurring events.

It is now possible to create events as well.

Work on a tasks view has started

Releases

You may have noticed that it’s been a while since the last release. This is not only because releases are additional work, but also because we already have a continuous delivery method with the nightly flatpak.
It’s clear that releases do provide value though: they communicate which version should be packaged, and whether it will be maintained. With the current manpower we cannot maintain releases, which makes them significantly less interesting.

With that said, the 0.8 release with the calendar is now long overdue and should be coming out soonish.

Experimental flatpak

Just to put it out there: in addition to the usual “master” branch of the flatpak, there is also an “experimental” branch containing, surprise, various experimental bits and pieces.

This currently entails:

A plugin that stores the account’s password encrypted by the account’s GPG key (blindly assuming there is one with a matching email address).

A search view

The upcoming calendar view (which we’ll move over in the next release)

The todo view mentioned above (which will take a little longer to move to master)

A “File as expense” plugin (a showcase of how we could do extensions in the mail view).

The Inbox crusher view (an experiment for a view to go through your inbox one-by-one).

It typically serves as a staging ground for new components, and it is the version that I’m running day-to-day. Flatpak makes it easy to switch back and forth between the branches on top of the same dataset, so you can try it and switch back if you don’t like what you see.

To give it a shot use the following command to switch the current flatpak branch:
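The command block itself did not survive the formatting here. As a hedged sketch, assuming a Flathub-style app id (com.kubeproject.kube) and a remote named kube (both guesses to adjust to your own setup):

```shell
# App id and remote name are assumptions; adjust them to your installation.
# Branches live side by side, so you can run either one explicitly:
flatpak install --user kube com.kubeproject.kube//experimental
flatpak run com.kubeproject.kube//experimental
# flatpak make-current sets which branch a plain "flatpak run" uses:
flatpak make-current com.kubeproject.kube experimental
```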

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.” For more info, head over to: kube-project.com

Although kdesrc-build supports building Qt5 just like the KDE Frameworks, I prefer to use the prebuilt Qt binaries from qt.io for testing against alpha and beta releases.
This saves me a lot of time on my machine, which is not powerful enough to compile the full Qt in a reasonable time.

To reproduce my setup, an additional step is required compared to a kdesrc-build setup that uses the system Qt. Nevertheless, you start as usual by cloning kdesrc-build itself:

git clone https://invent.kde.org/kde/kdesrc-build

Since kdesrc-build should be available anywhere on the system, not just in its own directory, add it to the PATH variable by appending export PATH=~/kde/src/kdesrc-build:$PATH to your .profile file.
Please remember to adapt the path according to where you cloned kdesrc-build to.
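The command for the setup step itself is missing above. Assuming a 2019-era kdesrc-build, the configuration files come from its initial-setup mode (a sketch; check kdesrc-build --help if the flag differs in your version):

```shell
cd ~/kde/src/kdesrc-build
# Generates ~/.kdesrc-buildrc and a default configuration
./kdesrc-build --initial-setup
```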

This step created all the required configuration files, which now need slight adaptations in order to work with the Qt we downloaded from qt.io in the beginning.

Edit ~/.kdesrc-buildrc, and replace the path to Qt qtdir with the path you installed Qt to.
The line should then look similar to this:

qtdir ~/.local/Qt/5.13.0/gcc_64 # Where to find Qt5

Now as a final step we need to prevent kdesrc-build from trying to build its own Qt, which can be done by commenting one include line:
include /home/jbb/kde/kdesrc-build/qt5-build-include, so it looks like #include /home/jbb/kde/kdesrc-build/qt5-build-include.

This should be it! Have fun compiling up to date KDE software using an up to date Qt without compiling Qt for hours :)

If it is just a small change and you don’t want to spend time on Phabricator, attaching a git diff against current master to the bug is OK, too.
It is best to mark the bug with a [PATCH] prefix in the subject.

The team working on the code is small, so please be a bit patient while waiting for reactions.
I hope we have improved our response time in the last months, but we are still lacking in that respect.

May 23, 2019

Yeah, so would I. Who wouldn't. C++ is notorious for taking its sweet time to get compiled. I never really cared about PCHs when I worked on KDE; I think I might have tried them once for something and it didn't seem to do a thing. In 2012, while working on LibreOffice, I noticed its build system used to have PCH support, but it had been nuked, with the usual poor OOo/LO style of a commit message stating the obvious (what) without bothering to state the useful (why). For whatever reason that caught my attention: reportedly PCHs saved a lot of build time with MSVC, so I tried it, and they did. And my having brought the PCH support back from the graveyard means that e.g. the Calc module does not take 5:30m to build on a (very) powerful machine, but only 1:45m. That's only one third of the time.

In line with my previous experience, on Linux that did nothing. I made the build system support also PCH with GCC and Clang, because it was there and it was simple to support it too, but there was no point. I don't think anybody has ever used that for real.

Then, about a year ago, I happened to be working on a relatively small C++ project that used some kind of an obscure build system called Premake I had never heard of before. While fixing something in it I noticed it also had PCH support, so guess what, I of course enabled it for the project. It again made the project build faster on Windows. And, on Linux, it did too. Color me surprised.

The idea must have stuck with me, because a couple weeks back I got the idea to look at LO's PCH support again and see if it can be made to improve things. See, the point is, PCHs for that small project were rather small, it just included all the std stuff like <vector> and <string>, which seemed like it shouldn't make much of a difference, but it did. Those standard C++ headers aren't exactly small or simple. So I thought that maybe if LO on Linux used PCHs just for those, it would also make a difference. And it does. It's not breath-taking, but passing --enable-pch=system to configure reduces Calc module build time from 17:15m to 15:15m (that's a less powerful machine than the Windows one). Adding LO base headers containing stuff like OUString makes it go down to 13:44m and adding more LO headers except for Calc's own leads to 12:50m. And, adding even Calc's headers, results in 15:15m again. WTH?

It turns out, there's some limit when PCHs stop making things faster and either don't change anything, or even make things worse. Trying with the Math module, --enable-pch=system and then --enable-pch=base again improve things in a similar fashion, and then --enable-pch=normal or --enable-pch=full just doesn't do a thing. Where it that 2/3 time reduction --enable-pch=full does with MSVC?

Clang has recently received a new option, -ftime-trace, which shows in a really nice and simple way where the compiler spends the time (take that, -ftime-report). And since things related to performance simply do catch my attention, I ended up building the latest unstable Clang just to see what it does. And it does:
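To try -ftime-trace yourself, the invocation is minimal (flag available in Clang 9 and later; the file name here is just an example):

```shell
# Emits some_file.json next to the object file; open it in
# chrome://tracing or speedscope to get flame-graph-style output.
clang++ -ftime-trace -c some_file.cxx -o some_file.o
```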

So, this is bcaslots.cxx, a smaller .cxx file in Calc. The first graph is without PCH, the second one is with --enable-pch=base, the third one is --enable-pch=full. This exactly confirms what I can see. Making the PCH bigger should result in something like the 4th graph, as it does with MSVC, but it results in things actually taking longer. And it can be seen why. The compiler does spend less and less time parsing the code, so the PCH works, but it spends more time in this 'PerformPendingInstantiations', which is handling templates. So, yeah, in case you've been living under a rock, templates make compiling C++ slow. Every C++ developer feeling really proud about themselves after having written a complicated template, raise your hand (... that includes me too, so let's put them back down, typing with one hand is not much fun). The bigger the PCH the more headers each C++ file ends up including, so it ends up having to cope with more templates. With the largest PCH, the compiler needs to spend only one second parsing code, but then it spends 3 seconds sorting out all kinds of templates, most of which the small source file does not need.

This one is column2.cxx, a larger .cxx file in Calc. Here, the biggest PCH mode leads to some improvement, because this file includes pretty much everything under the sun and then some more, so less parsing makes some savings, while the compiler has to deal with a load of templates again, PCH or not. And again, one second for parsing code, 4 seconds for templates. And, if you look carefully, 4 seconds more to generate code, most of it for those templates. And after the compiler spends all this time on templates in all the source files, it gets all passed to the linker, which will shrug and then throw most of it away (and that will too take a load of time, if you still happen to use the BFD linker instead of gold/lld with -gsplit-dwarf -Wl,--gdb-index). What a marvel.

Now, in case there seems to be something fishy about the graphs, the last graph indeed isn't from MSVC (after all, its reporting options are as "useful" as -ftime-report). It is from Clang. I still know how to do performance magic ...

A few months ago, we received a phone call from a bioinformatics group at a European university. The problem they were having appeared very simple: they wanted to know how to use mmap() to load a large data set into RAM at once. OK, I thought, no problem, I can handle that one. It turns out this has grown into a complex and interesting exercise in profiling and threading.

The background is that they are performing Markov-Chain Monte Carlo simulations by sampling at random from data sets containing SNP (pronounced “snips”) genetic markers for a selection of people. It boils down to a large 2D matrix of floats where each column corresponds to an SNP and each row to a person. They provided some small and medium sized data sets for me to test with, but their full data set consists of 500,000 people with 38 million SNP genetic markers!

The analysis involves selecting a column (SNP) at random in the data set, performing some computations on the data for all of the individuals, and collecting some summary statistics. Do that for all of the columns in the data set, then repeat for a large number of iterations. This allows you to approximate the underlying true distribution from the discrete data that has been collected.

That’s the 10,000 ft view of the problem, so what was actually involved? Well we undertook a bit of an adventure and learned some interesting stuff along the way, hence this blog series.

The stages we went through were:

Preprocessing

Loading the Data

Fine-grained Threading

Preprocessing Reprise

Coarse Threading

In this blog, I’ll detail stages 1 and 2. The rest of the process will be revealed as the blog series unfolds, and I’ll include a final summary at the end.

1. Preprocessing

The first thing we noticed when looking at the code they already had is that quite some work is done when reading in the data for each column. They compute some summary statistics on the column, then scale and bias all the data points in that column such that the mean is zero. Bearing in mind that each column will be processed many times (typically 10k to 1 million), this is wasteful to repeat every time the column is used.

So, reusing some general advice from 3D graphics, we moved this work further up the pipeline into a preprocessing step. The SNP data is actually stored in a compressed form, with 4 SNP values quantized into a few bytes, which we then decompress when loading. So the preprocessing step decompresses the SNP data, calculates the summary statistics, adjusts the data, and then writes the floats out to disk in the form of a ppbed file (preprocessed bed, where bed is a standard format used for this kind of data).

The upside is that we avoid all of this work on every iteration of the Monte Carlo simulation at runtime. The downside is that 1 float per SNP per person adds up to a hell of a lot of data for the larger data sets! In fact, for the full data set it’s just shy of 69 TB of floating point data! But to get things going, we were just worrying about smaller subsets. We will return to this later.

2. Loading the data

Even on moderately sized data sets, loading the entirety of the data set into physical RAM at once is a no-go as it will soon exhaust even the beefiest of machines. They have some 40 core, many-many-GB-of-RAM machine which was still being exhausted. This is where the original enquiry was aimed – how to use mmap(). Turns out it’s pretty easy as you’d expect. It’s just a case of setting the correct flags so that the kernel doesn’t actually take a copy of the data in the file. Namely, PROT_READ and MAP_SHARED:

When dealing with such large amounts of data, be careful of overflows in temporaries! We had a bug where ppBedSize was overflowing and later causing a segfault.

So, at this point we have a float *ppBed pointing at the start of the huge 2D matrix of floats. That’s all well and good but not very convenient for working with. The code base already made use of Eigen for vector and matrix operations so it would be nice if we could interface with the underlying data using that.

Turns out we can (otherwise I wouldn’t have mentioned it). Eigen provides VectorXf and MatrixXf types for vectors and matrices but these own the underlying data. Luckily Eigen also provides a wrapper around these in the form of Map. Given our pointer to the raw float data which is mmap()‘d, we can use the placement new operator to wrap it up for Eigen like so:

At this point we can now do operations on the mappedZ matrix and they will operate on the huge data file which will be paged in by the kernel as needed. We never need to write back to this data so we didn’t need the PROT_WRITE flag for mmap.

Yay! Original problem solved and we’ve saved a bunch of work at runtime by preprocessing. But there’s a catch! It’s still slow. See the next blog in the series for how we solved this.

May 22, 2019

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users privacy. As such, we will prefer to support online services where users are in control of their data.

I am happy to announce the release of 0.4.0 version of the Elisa music player.

Improved Grid Views Elements

Nate Graham has reworked the grid elements (especially visible with the albums view).

I must confess that I was a bit uneasy with this change (it touched a part of the UI mostly unchanged since the early versions). I am now very happy about it.

(Before and after screenshots of the reworked grid elements.)

Getting Involved

I would like to thank everyone who contributed to the development of Elisa, including code contributions, testing, and bug reporting and triaging. Without all of you, I would have stopped working on this project.

New features and fixes are already being worked on. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy to get any kind of contributions!

We have some tasks that would be perfect junior jobs. They are a perfect way to start contributing to Elisa. There are more, not yet listed here, reported on bugs.kde.org.

The Elisa source code tarball is available here. There is no Windows setup; there is currently a blocking problem with it (no icons) that is being investigated. I hope to be able to provide installers for later bugfix versions.

The phone/tablet port project could easily use some help to build an optimized interface on top of Kirigami. It remains to be seen how to handle this in relation to the current desktop UI.

KDAB has released a new version of KDSoap. This is version 1.8.0, and it comes more than one year after the last release (1.7.0).

KDSoap is a tool for creating client applications for web services without the need for any further component such as a dedicated web server.

KDSoap lets you interact with applications which have APIs that can be exported as SOAP objects. The web service then provides a machine-accessible interface to its functionality via HTTP. Find out more...

Server-side

New method KDSoapServerObjectInterface::additionalHttpResponseHeaderItems to let server objects return additional HTTP headers. This can be used to implement support for CORS, using KDSoapServerCustomVerbRequestInterface to implement the OPTIONS response, with “Access-Control-Allow-Origin” in the headers of the response (github issue #117).

Stopped generation of two job classes with the same name, when two bindings have the same operation name. Prefixed one of them with the binding name (github issue #139 part 1)

Prepended this-> to generated method calls to avoid a compilation error when a variable and a method have the same name (github issue #139 part 2)

WSDL parser / code generator changes, applying to both client and server side

Source incompatible change: all deserialize() functions now require a KDSoapValue instead of a QVariant. If you use a deserialize(QVariant) function, you need to port your code to use KDSoapValue::setValue(QVariant) before deserialize()

Source incompatible change: all serialize() functions now return a KDSoapValue instead of a QVariant. If you use a QVariant serialize() function, you need to port your code to use QVariant KDSoapValue::value() after serialize()

Source incompatible change: xs:QName is now represented by KDQName instead of QString, which allows the namespace to be extracted. The old behaviour is available via KDQName::qname().

Fixed double-handling of empty elements

Fixed fault elements being generated in the wrong namespace, must be SOAP-ENV:Fault (github issue #81).

Added an import-path argument for setting the local path from which to load files that would otherwise be downloaded.

Added -help-on-missing option to kdwsdl2cpp to display extra help on missing types.

May 21, 2019

I’m very excited to start off the Google Summer of Code blogging experience regarding the project I’m doing with my KDE mentors David Edmundson and Nate Graham. What we’ll be trying to achieve this summer is to have SDDM be more in sync with the Plasma desktop. What does that mean? The essence of the problem…

May 20, 2019

If you occasionally do performance profiling as I do, you probably know Valgrind's Callgrind and the related UI, KCachegrind. While Callgrind is a pretty powerful tool, running it takes quite a while (not exactly fun with something as big as e.g. LibreOffice).

Recently I finally gave Linux perf a try. Not quite sure why I didn't use it before; IIRC when I tried it sometime long ago, it was probably difficult to set up or something. Using perf record has very little overhead, but I wasn't exactly thrilled by perf report. I mean, it's a text UI, and it just gives a list of functions, so if I want to see anything close to a call graph, I have to manually expand one function, expand another function inside it, expand yet another function inside that, and so on. Not that it wouldn't work, but compared to just looking at what KCachegrind shows and seeing ...

While figuring out how to use perf, I watched a talk by Milian Wolff and noticed a mention of a Callgrind script on one slide. Of course I had to try it. It was a bit slow, but hey, I could finally look at perf results without feeling like that's an effort. Well, and then I improved the part of the script that was slow, so I guess I've just put the effort elsewhere :).

And I thought this little script might be useful for others. After mailing Milian, it turned out he had created the script just as a proof of concept and wasn't interested in it anymore, instead developing Hotspot as a UI for perf. Fair enough, but I think I still prefer KCachegrind; I'm used to it, and I don't have to switch UIs when switching between perf and callgrind. So, with his agreement, I've submitted the script to KCachegrind. If you find it useful, just download it and do something like:

Plasma 5.16 beta was released last week and there’s now a further couple of weeks to test it to find and fix all the beasties. To help out download the Neon Testing image and install it in a virtual machine or on your raw hardware. You probably want to do a full-upgrade to make sure you have the latest builds. Then try out the new notifications system, or the new animated wallpaper settings or anything else mentioned in the release announcement. When you find a problem report it on bugs.kde.org and/or chat on the Plasma Matrix room. Thanks for your help!

I have been at many software events and have helped with or been part of the organization of a few of them. Based on that experience, and the fact that I have participated in the last two editions, let me tell you that J On The Beach is a great event.

The main factors that lead me to such a conclusion are:

It is all about content. I have seen many events that, over time, lose their focus on the quality of the content. It is a hard focus to keep, especially as you grow. @JOTB19 had great content: well-delivered talks and workshops, performed by bright people with something to say that was relevant to the audience.

I think the event has not reached its limit yet, especially when it comes to workshops.

Designing the content structure to target the right audience is as important as bringing speakers with great things to say. As any event matures, tough decisions will need to be taken in order to find its own space and identity among outstanding competitors.

When it comes to themes, will J On The Beach keep targeting several topics, or will it narrow them to one or two? Will they always be the same or will they rotate?

When it comes to size, will it grow or will it remain at the current numbers? Will the price increase or will it be kept in the current range?

When it comes to content, will the event focus more energy and time on the “hands on” learning sessions, or will workshops be kept as relevant compared to the talks as they are today? Will the talks’ length be reduced? Will we see lightning talks?

J On The Beach was well organised. A good organization is not one that never runs into trouble, but one that handles it smoothly so there is little or no perceived impact. This event has a diligent team behind it, judging by the little to no impact I perceived.

Support from local companies. As Málaga matures as a software hub, more and more companies arrive in the area expecting to grow in size, so the need to attract local talent grows in parallel.

Some of these foreign companies understand how important it is to show up at local events to become known to as many local developers as possible. J On The Beach has captured the attention of several of these companies.

The organizers have understood this reality and support these companies in using the event to openly recruit people. This symbiotic relationship is a very productive one, from what I have witnessed.

It is a hard relationship to sustain though, especially if the event does not grow in size, so in the future the current relationship will probably need additional common interests to remain productive for both sides.

Global by default. Most events in Spain have traditionally been designed for Spaniards first, turning into more global events as they grow. J On The Beach is global by default, by design, since day 1. It is harder to succeed that way, but beyond the activation point it turns out to be easier to become sustainable. The organizers took the risk and have reached that point already, which provides the event a bright future in my opinion.

The fact that the event is able to attract developers from many countries, especially eastern European ones, makes J On The Beach very attractive to foreign companies already located in Málaga, from the recruitment perspective. Málaga is a great place not just to work in English but also to live in English. There are well-established communities from many different countries in the metropolitan area, due to how strong the tourist industry is here. These factors, together with others like logistics, affordable living costs, a good public health care system, sunny weather, the availability of international and multilingual schools, etc., reduce the adaptation effort when relocating, especially for developers’ families. J On The Beach brings tasty fish to the pond.

Let me name a couple of points that can make the event even better:

It is very hard to find a venue that fits an event during its consolidation phase and evolves with it. This edition’s venue represents a significant improvement over last year’s edition. There is room for improvement though.

It would be ideal to find a place in Málaga itself, closer to where the companies are located and to places to hang out after the event, while keeping the good things the current venue/location provides, which are plenty.

Finding the right venue is tough. There are decision-making factors that participants do not usually see but are essential like costs, how supportive the venue staff and owners are, accommodation availability in the surrounding area, availability on the selected dates, etc. It is one of the most difficult points to get right, in my experience.

Great events deserve great keynote speakers. They are hard to get, but they often make the difference between great and must-attend events.

Great keynote speakers do not necessarily mean popular ones. I already see celebrities at bigger and more expensive events. I would love to see in Málaga old-time computer science cowboys, I mean those first-class engineers who did something relevant some time ago and have witnessed the evolution of our industry and of their own inventions. They are able to bring a perspective that very few can provide, extremely valuable in these fast-changing times. Those gems are harder to see at big/popular events and might be a good target for a smaller, high-quality event. I think it would be a great sign of success if that kind of professional came to talk at J On The Beach.

I am very glad there is such a great event close to where I live. J On The Beach is worthwhile not just for local developers but also for those from abroad. Every year I attend several events in other countries with bigger names but less value than J On The Beach. It will definitely be on my 2020 agenda. Thanks to every person involved in making it possible.

May 19, 2019

With VLC and libvlc lacking in Craft, phonon-vlc cannot be built successfully on macOS. This caused the KDE Connect build in Craft to fail.

As a small step of my GSoC project, I managed to build KDE Connect by removing the phonon-vlc dependency. But that is not a good solution; I should try to fix the phonon-vlc build on macOS. So during the community bonding period, in order to get to know the community and some of its important tools better, I tried to fix phonon-vlc.

Fixing phonon-vlc

At first, I installed libVLC via MacPorts. All header files and libraries are installed into the system path, so theoretically there should not be a problem building phonon-vlc. But an error occurred:

We can see that the compilation is fine; the error occurs at the end, during linking. The error message tells us there is no QtDBus library. To fix it, I made a small patch to add QtDBus manually in the CMakeLists file.

To use it, run craft --add-blueprint-repository https://github.com/inokinoki/craft-blueprints-inoki.git and the blueprint(s) will be added into your local blueprint directory.

Then, craft binary/vlc will fetch the VLC binary and install the header files and libraries into Craft's include and lib paths. Finally, you can build whatever you want with a libvlc dependency.

Conclusion

As of now, KDE Connect uses QtMultimedia rather than phonon and phonon-vlc to play a sound. But this work could also be useful for other applications or libraries that depend on phonon, phonon-vlc or VLC. This small step may help build them successfully on macOS.

I’m Weixuan XIAO, with the nickname Inoki; sometimes Inokinoki is used to avoid duplicate usernames.

I’m glad to be selected for Google Summer of Code 2019 to work with the KDE Community on making KDE Connect work on macOS. And I’m willing to become a long-term contributor in the KDE Community.

As a Chinese student, I’m studying in France for my engineering degree. At the same time, I’m awaiting my bachelor’s degree from Shanghai University.

I major in Real-Time Systems and Embedded Engineering. With strong interests in operating systems and computer architecture, I like playing with small devices like the Arduino and Raspberry Pi, and with different systems like macOS and Linux (especially Manjaro with KDE; they are the best partners).

I’m crazy about Japanese culture, for example animation and games. Even my nickname is actually the pronunciation of my real name in Japanese. So if all of this is the choice of Steins;Gate, I’ll naturally accept it :)

I speak Chinese, French, English, and a little Japanese. But I realize that my English is awful, so if I make any mistakes, please tell me. That will improve my English and I will appreciate it.

Currently this is supported only for PDF documents (and poppler version ≥ 0.72), but that will change soon, thanks to another change by Tobias Deiminger under review that extends the functionality to other document formats supported by Okular.

Recently I have been researching possibilities to make members of KoShape copy-on-write. At first glance, it seems enough to declare the d-pointers as some subclass of QSharedDataPointer (see Qt’s implicit sharing) and then replace pointers with instances. However, a number of problems remain to be solved, one of them being polymorphism.

polymorphism and value semantics

In the definition of KoShapePrivate class, the member fill is stored as a QSharedPointer:

QSharedPointer<KoShapeBackground> fill;

There are a number of subclasses of KoShapeBackground, including KoColorBackground and KoGradientBackground, to name just a few. We cannot store an instance of KoShapeBackground directly, since we want polymorphism. But, well, making KoShapeBackground copy-on-write seems to have nothing to do with whether we store it as a pointer or an instance. So let’s just leave it here – I will come back to this question at the end of this post.

d-pointers and QSharedData

The KoShapeBackground hierarchy (similar to the KoShape one) uses derived d-pointers for storing private data. To make things easier, I will use a small example here to elaborate on its use.

The main goal of making DerivedPrivate a subclass of AbstractPrivate is to avoid multiple d-pointers in the structure. Note that there are constructors taking a reference to the private data object; these make it possible for a Derived object to use the same d-pointer as its Abstract parent. The Q_D() macro converts d_ptr, which is a pointer to AbstractPrivate, to another pointer, named d, of one of its descendant types; here, a DerivedPrivate. It is used together with the Q_DECLARE_PRIVATE() macro in the class definition and has a rather complicated implementation in the Qt headers. But for simplicity, it does not hurt for now to understand it as the following:
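A minimal sketch of the pattern (std::unique_ptr stands in for Qt's QScopedPointer so it compiles without Qt, and Q_D_SIMPLIFIED is my name for the simplified mental model of Q_D; the real macro goes through d_func() and qGetPtrHelper()):

```cpp
#include <memory>

class AbstractPrivate
{
public:
    virtual ~AbstractPrivate() = default;
    int var = 0;
};

class DerivedPrivate : public AbstractPrivate
{
public:
    int derivedVar = 0;
};

// Simplified picture of what Q_D boils down to: cast the stored
// base-class d-pointer to the descendant private type.
#define Q_D_SIMPLIFIED(Class) \
    Class##Private *const d = \
        reinterpret_cast<Class##Private *>(d_ptr.get())

class Abstract
{
public:
    Abstract() : d_ptr(new AbstractPrivate) {}
    virtual ~Abstract() = default;
protected:
    // subclasses pass in their own private object, so the whole
    // hierarchy shares a single d-pointer
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
    std::unique_ptr<AbstractPrivate> d_ptr;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*new DerivedPrivate) {}
    void modifyVar(int v)
    {
        Q_D_SIMPLIFIED(Derived);   // d is a DerivedPrivate *
        d->derivedVar = v;
    }
    int derivedVar() const
    {
        return static_cast<const DerivedPrivate *>(d_ptr.get())->derivedVar;
    }
};
```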

It turns out that Q_D calls d_func(), which in turn calls an overload of qGetPtrHelper() that takes a const Ptr &ptr. What does ptr.operator->() return? And what is the difference between QScopedPointer and QSharedDataPointer here?

QScopedPointer's operator->() is a const method that returns a non-const pointer to T. QSharedDataPointer, however, has two operator->()s, one being const T *operator->() const, the other T *operator->(), and they have quite different behaviours: the non-const variant calls detach() (where copy-on-write is implemented), while the const one does not.
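The difference can be reproduced with a toy copy-on-write pointer (a stand-in for illustration, not Qt's actual implementation): the non-const operator->() detaches when the payload is shared, while the const one never does.

```cpp
#include <memory>

// Toy stand-in for QSharedDataPointer, just to show the two
// operator->() overloads at work.
template <typename T>
class CowPointer
{
public:
    explicit CowPointer(T *p) : m_d(p) {}

    // non-const access: detach first, so the caller may safely mutate
    T *operator->()
    {
        detach();
        return m_d.get();
    }
    // const access: no detach, the data stays shared
    const T *operator->() const { return m_d.get(); }

    long useCount() const { return m_d.use_count(); }

private:
    void detach()
    {
        if (m_d.use_count() > 1)
            m_d = std::make_shared<T>(*m_d);   // copy-on-write
    }
    std::shared_ptr<T> m_d;
};

struct Payload { int value = 0; };
```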

qGetPtrHelper() here can only take d_ptr as a const QSharedDataPointer, never a non-const one; so, no matter which d_func() we call, we can only ever get a const AbstractPrivate *. That is exactly the problem.

To resolve this problem, let’s replace the Q_D macros with the ones we define ourselves:

We will then use SHARED_D(Class) in place of Q_D(Class), and CONST_SHARED_D(Class) in place of Q_D(const Class). Since the const and non-const variants really do behave differently, it helps to differentiate these two uses. Also, delete Q_DECLARE_PRIVATE, since we do not need it any more:
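Roughly, the macros could look like this (a reconstruction; the exact definitions may differ, and SharedDPtr is a minimal stand-in for QSharedDataPointer so the sketch compiles without Qt). Note that this version still carries the flaw discussed next: once the data is shared, detaching copies it as a plain AbstractPrivate.

```cpp
#include <memory>

// Minimal stand-in for QSharedDataPointer: data() detaches when the
// payload is shared, constData() never does. The detach copies the
// object as a plain T, i.e. as AbstractPrivate: it slices subclasses!
template <typename T>
class SharedDPtr
{
public:
    explicit SharedDPtr(T *p) : m_d(p) {}
    T *data()
    {
        if (m_d.use_count() > 1)
            m_d = std::make_shared<T>(*m_d);
        return m_d.get();
    }
    const T *constData() const { return m_d.get(); }
private:
    std::shared_ptr<T> m_d;
};

// Plausible reconstructions of the macros described above.
#define SHARED_D(Class) \
    Class##Private *const d = \
        reinterpret_cast<Class##Private *>(d_ptr.data())
#define CONST_SHARED_D(Class) \
    const Class##Private *const d = \
        reinterpret_cast<const Class##Private *>(d_ptr.constData())

class AbstractPrivate
{
public:
    virtual ~AbstractPrivate() = default;
};

class DerivedPrivate : public AbstractPrivate
{
public:
    int var = 0;
};

class Derived
{
public:
    Derived() : d_ptr(new DerivedPrivate) {}
    void setVar(int v) { SHARED_D(Derived); d->var = v; }
    int var() const { CONST_SHARED_D(Derived); return d->var; }
private:
    SharedDPtr<AbstractPrivate> d_ptr;
};
```

As long as the data is not shared this works; copying a Derived and then mutating it is where things go wrong, as described next.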

… big whoops, what is that random thing there? Well, if we use dynamic_cast in place of reinterpret_cast, the program simply crashes after ins->modifyVar();, indicating that ins's d_ptr.data() is not a DerivedPrivate at all.

virtual clones

The detach() method of QSharedDataPointer will by default create an instance of AbstractPrivate, regardless of what the instance really is. Fortunately, it is possible to change that behaviour by specifying the clone() method.

First, we need to make a virtual function in AbstractPrivate class:

virtual AbstractPrivate *clone() const = 0;

(we make it pure virtual just to force all subclasses to re-implement it; if your base class is not abstract, you probably want to implement the clone() method there as well) and then override it in DerivedPrivate:
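The override presumably looks roughly like this (a hedged sketch; the real DerivedPrivate has more members): a copy made as the real dynamic type.

```cpp
class AbstractPrivate
{
public:
    virtual ~AbstractPrivate() = default;
    virtual AbstractPrivate *clone() const = 0;
};

class DerivedPrivate : public AbstractPrivate
{
public:
    int var = 0;
    AbstractPrivate *clone() const override
    {
        // copy as the real dynamic type, not as AbstractPrivate
        return new DerivedPrivate(*this);
    }
};
```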

Well, another problem appears: the constructor for ins2 seems too ugly and messy. We could, like the private classes, implement a virtual clone() function for this kind of thing, but that is still not elegant enough, and we cannot use a default copy constructor for any class containing such QScopedPointers.

What about QSharedPointer, which is copy-constructible? Well, then the copies actually point to the same data structure, and no copy-on-write is performed at all. That is still not what we want.

This class allows you to use Descendent<T> (read as "descendent of T") to represent an instance of any subclass of T. It is copy-constructible, move-constructible, copy-assignable, and move-assignable.

It gives just the same results as before, but is much neater and nicer. How does it work?

First we define a class concept, where we put what we want our instance to satisfy: we would like to access the object as const and non-const, and to clone it as-is. Then we define a template class model<U>, where U is a subclass of T, and implement these functionalities there.

Next, we store a unique_ptr<concept>. The reason for not using QScopedPointer is that QScopedPointer is not movable, and movability is a feature we will actually want (in sink arguments and return values).

Finally, there are just the constructor, the move and copy operations, and the ways to access the wrapped object.
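Putting the pieces together, a sketch of the wrapper could look like the following (in the style of Sean Parent's concept/model type erasure; the inner classes are capitalised here only because "concept" is a reserved word in C++20, and the Shape/Square demo types are hypothetical):

```cpp
#include <memory>
#include <type_traits>
#include <utility>

template <typename T>
class Descendent
{
    struct Concept {
        virtual ~Concept() = default;
        virtual T &get() = 0;
        virtual const T &get() const = 0;
        virtual std::unique_ptr<Concept> clone() const = 0;
    };

    template <typename U>
    struct Model : Concept {
        explicit Model(U x) : instance(std::move(x)) {}
        T &get() override { return instance; }
        const T &get() const override { return instance; }
        std::unique_ptr<Concept> clone() const override
        {
            // copy as the real type U; the result converts implicitly
            // from unique_ptr<Model<U>> to unique_ptr<Concept>
            return std::make_unique<Model<U>>(instance);
        }
        U instance;
    };

public:
    // constrained so this does not hijack the copy constructor
    template <typename U,
              typename = typename std::enable_if<
                  std::is_base_of<T, typename std::decay<U>::type>::value>::type>
    Descendent(U x) : m_d(std::make_unique<Model<U>>(std::move(x))) {}

    Descendent(const Descendent &that) : m_d(that.m_d->clone()) {}
    Descendent(Descendent &&that) = default;
    Descendent &operator=(const Descendent &that)
    { m_d = that.m_d->clone(); return *this; }
    Descendent &operator=(Descendent &&that) = default;

    T *data() { return &m_d->get(); }
    const T *data() const { return &m_d->get(); }

private:
    std::unique_ptr<Concept> m_d;
};

// Demo hierarchy, for illustration only.
struct Shape {
    virtual ~Shape() = default;
    virtual int sides() const { return 0; }
};
struct Square : Shape {
    int n = 4;
    int sides() const override { return n; }
};
```

Copies made through the wrapper are deep copies via clone(), so mutating one copy leaves the other untouched.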

When Descendent<Abstract> ins2 = ins; is called, we will go through the copy constructor of Descendent:

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}

which will then call ins.m_d->clone(). But remember that ins.m_d actually holds a pointer to model<Derived>, whose clone() is return make_unique<model<Derived> >(Derived(instance));. This expression calls the copy constructor of Derived and then makes a unique_ptr<model<Derived> >, which calls the constructor of model<Derived>:

model(Derived x) : instance(move(x)) {}

which move-constructs instance. Finally, the unique_ptr<model<Derived> > is implicitly converted to unique_ptr<concept>, as per the conversion rule: "If T is a derived class of some base B, then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B>."

Hot on the heels of last week, this week's Usability & Productivity report continues to overflow with awesomeness. Quite a lot of the work you see featured here is already available to test in the Plasma 5.16 beta, too! But why stop there? Here's more:

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Hi! I’m Akhil K Gangadharan and I’ve been selected for GSoC this year with Kdenlive. My project is titled ‘Revamping the Titler Tool’ and my work this summer aims to kick off the complete revamp of one of the major tools used in video editing in Kdenlive, called the Titler tool.

Titler Tool?

The Titler tool is used to create, you guessed it, title clips. Title clips are clips that contain text and images that can be composited over videos.

The Titler tool

Why revamp it?

In Kdenlive, the Titler tool is implemented using QGraphicsView, which has been considered deprecated since the release of Qt 5. This obviously makes it prone to bugs appearing upstream that affect the functionality of the tool. It has caused issues in the past: popular features like the Typewriter effect had to be dropped because of uncontrollable crashes caused by QGraphicsView.

How?

Using QML.

Currently the Titler tool uses QPainter, where every property is painted and every animation has to be programmed by hand. QML makes it easy to create powerful animations, since it is a language designed for building UIs, and these can then be rendered to create title clips as needed.

Implementation details - a brief overview

For the summer, I intend to complete work on the backend implementation. The first step is to write and test a complete MLT producer module which can render QML frames, and then to begin testing the integration of this module with a new Titler tool.

This is what the backend currently looks like:

After the revamp, the backend will look like this:

Once the backend is done, we will begin integrating it with Kdenlive and evolving the titler to use the new backend.

A great, long challenge lies ahead, and I’m looking forward to this summer and beyond with the community, completing the tool right from the backend to the new UI.

Finally, a big thanks to the Kdenlive community for getting me here and to my college student community, FOSS@Amrita for all the support and love!

* General tests:
– Does the Plasma desktop start as normal, with no apparent regressions over 5.15.5?
– General workflow: testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

May 17, 2019

Thanks to Nick Richards we've been able to convince Flathub to temporarily accept our old appdata files as still valid. It's a stopgap workaround, but at least it gives us some breathing room. So the updates are coming in as we speak.

I tried Latte Dock for a week or so, in order to see whether this great piece of software could improve my desktop experience. Here is what I think about it.

A week with Latte Dock

Latte Dock is a dock that provides multiple visual effects in order to improve the experience with icons and applications. I’ve already written about it, mostly in a negative way, not because of the software or its quality, but because I don’t see much excitement in yet another OS X-style dock bar. However, in order to give a more accurate review of the experience with Latte, I decided to force myself to use it as the only dock on my main computer.

I don’t know much about the history and goals of the project; I suspect it aims to be a more elegant dock than the default Plasma one, and it is of course inspired by the OS X dock, with its parabolic zoom and removal of the application tray. It is worth noting that there are multiple layouts out there, each one adding one or more features to customize the appearance of the dock, for example adding a top bar or changing the bar layout and size. I chose the out-of-the-box layout in order to get a more unbiased impression.

My Plasma Dock

My Plasma dock was pretty simple and composed of (from left to right):

the plasma menu;

the switch desktop applet;

the application tray;

a deck of my main applications (four);

the plasma dashboard icon;

the notification area;

the (digital) clock;

the logout applet.

I tend to keep my panel always laid out the same on all my computers, so that my eyes...

In this release, many aspects of Plasma have been polished and rewritten to provide greater consistency and bring new features. There is a completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and many other things. The System and Widget Settings have been refined by porting code to newer Kirigami and Qt technologies and polishing the user interface. And of course the VDG and Plasma team's effort towards the Usability & Productivity goal continues, getting feedback on all the papercuts in our software that make your life less smooth and fixing them to ensure an intuitive and consistent workflow for your daily use.

For the first time, the default wallpaper of Plasma 5.16 will be decided by a contest in which everyone can participate and submit art. The winner will receive a Slimbook One v2 computer, an eco-friendly, compact machine measuring only 12.4 x 12.8 x 3.7 cm. It comes with an i5 processor, 8 GB of RAM, and is capable of outputting video in glorious 4K. Naturally, your One will come decked out with the upcoming KDE Plasma 5.16 desktop, your spectacular wallpaper, and a bunch of other great software made by KDE. You can find more information and submitted work on the competition wiki page, and you can submit your own wallpaper in the subforum.

Desktop Management

New Notifications

Theme Engine Fixes for Clock Hands!

Panel Editing Offers Alternatives

Login Screen Theme Improved

Completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and more!

Plasma themes are now correctly applied to panels when selecting a new theme.

All widget configuration settings have been modernized and now feature an improved UI. The Color Picker widget has also improved, and now allows dragging colors from the plasmoid to text editors, the palettes of photo editors, etc.

The look and feel of the lock, login, and logout screens has been improved, with new icons, labels, hover behavior, login button layout, and more.

When an app is recording audio, a microphone icon now appears in the System Tray, which allows changing and muting the volume using middle click and the mouse wheel. The Show Desktop icon is now also present in the panel by default.

The Wallpaper Slideshow settings window now displays the images in the selected folders, and allows selecting and deselecting them.

The Task Manager features better organized context menus and can now be configured to move a window from a different virtual desktop to the current one on middle click.

The default Breeze window and menu shadow colors are back to pure black, which improves the visibility of many things, especially when using a dark color scheme.

The "Show Alternatives..." button is now visible in panel edit mode; use it to quickly swap a widget for a similar alternative.

Plasma Vaults can now be locked and unlocked directly from Dolphin.

Settings

Color Scheme

Application Style and Appearance Settings

There has been a general polish in all pages; the entire Appearance section has been refined, the Look and Feel page has moved to the top level, and improved icons have been added in many pages.

The Color Scheme and Window Decorations pages have been redesigned with a more consistent grid view. The Color Scheme page now supports filtering by light and dark themes, drag and drop to install themes, undo deletion and double click to apply.

The theme preview of the Login Screen page has been overhauled.

The Desktop Session page now features a "Reboot to UEFI Setup" option.

There is now full support for configuring touchpads using the Libinput driver on X11.

Window Management

Window Management

Initial support for using Wayland with proprietary Nvidia drivers has been added. When using Qt 5.13 with this driver, graphics are also no longer distorted after waking the computer from sleep.

Wayland now features drag and drop between XWayland and Wayland native windows.

Also on Wayland, the System Settings Libinput touchpad page now allows you to configure the click method, switching between "areas" or "clickfinger".

KWin's blur effect now looks more natural and correct to the human eye, as it no longer unnecessarily darkens the area between blurred colors.

Two new default shortcuts have been added: Meta+L can now be used by default to lock the screen and Meta+D can be used to show and hide the desktop.

GTK windows now apply the correct active and inactive colour schemes.

Plasma Network Manager

Plasma Network Manager with Wireguard

The Networks widget is now faster to refresh Wi-Fi networks and more reliable at doing so. It also has a button to display a search field to help you find a particular network from among the available choices. Right-clicking on any network will expose a "Configure…" action.

WireGuard is now compatible with NetworkManager 1.16.

One-Time Password (OTP) support has been added to the OpenConnect VPN plugin.

Discover

Updates in Discover

In Discover's Update page, apps and packages now have distinct "downloading" and "installing" sections. When an item has finished installing, it disappears from the view.

The task completion indicator now looks better, using a real progress bar. Discover also displays a busy indicator while checking for updates.

Improved support and reliability for AppImages and other apps that come from store.kde.org.

Discover now allows you to force quit while installation or update operations are in progress.

The sources menu now shows the version number for each different source for that app.

It’s been a great pleasure to be chosen to work with KDE during GSoC this year. I’ll be working on KIOFuse, and hopefully by the end of the coding period it will be well integrated with KIO itself. Development will mainly be coordinated on the #kde-fm channel (IRC nick: feverfew), with fortnightly updates on my blog, so feel free to pop by! Here’s a small snippet of my proposal to give everyone an idea of what I’ll be working on:

KIOSlaves are a powerful feature of the KIO framework, allowing KIO-aware applications such as Dolphin to interact with services outside the local filesystem over URLs such as fish:// and gdrive:/. However, KIO-unaware applications are unable to interact seamlessly with KIOSlaves. For example, editing a file in gdrive:/ in LibreOffice will not save changes to your Google Drive. One potential solution is to make use of FUSE, an interface provided by the Linux kernel that allows userspace processes to provide a filesystem which can be mounted and accessed by regular applications. KIOFuse is a project by fvogt that makes it possible to mount KIO filesystems in the local system, thereby exposing them to POSIX-compliant applications such as Firefox and LibreOffice.

This project intends to polish KIOFuse such that it is ready to be a KDE project. In particular, I’ll be focusing on the following four broad goals:

• Improving compatibility with KDE and non-KDE applications by extending and improving supported filesystem operations.

• Improving KIO Slave support.

• Performance and usability improvements.

• Adding a KDE Daemon module to allow the management of KIOFuse mounts and the translation of KIO URLs to their local path equivalents.

We’re still on track to release Krita 4.2.0 this month! Compared to the alpha release, we have fixed over thirty issues. This release also has a fresh splash screen by Tyson Tan and restores Python support to the Linux AppImage. The Linux AppImage does not have support for sound, and the macOS build does not have support for G’Mic.

Warning: Linux users should be careful with distribution packages. We have a host of patches for Qt queued up, some of which are important for distributions to carry until the patches are merged and released in a new version of Qt.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

May 15, 2019

The flatpak and flathub developers have changed what they consider a valid appdata file, so our appdata files that were valid last month are not valid anymore and thus we can't build the KDE Applications 19.04.1 packages.

Help testing the Plasma Theme switching now

You like Plasma Themes? You even design Plasma Themes yourself? You want to see switching between Plasma Themes work correctly, especially for Plasma panels?

Please get one of the Live images with the latest code straight from the Plasma developers’ hands (or, if you build from the master branches yourself, last night’s code should be fine) and give the switching of Plasma Themes a good test, so we can be sure things will work as expected when Plasma 5.16 arrives:

If you find glitches, please report them here in the comments, or better on the #plasma IRC channel.

Plasma Themes, to make it more your Plasma

One of the things which makes Plasma so attractive is the officially supported option to customize the style as well, beyond colors and wallpaper, allowing users to personalize the look to their liking. Designers have picked up on that and created a good set of custom designs (store.kde.org lists 470 themes at the time of writing).

Plasma Theme looking good on first contact with Plasma 5.16

The most annoying pain point has been that, on selecting and applying a new Plasma Theme, the theme data was not correctly picked up, especially by Plasma panels. Only after a restart of Plasma could the theme be fully experienced. This made quick testing of themes, e.g. from store.kde.org, a sad story, given that most themes looked a bit broken; and without knowing more, one would think it was the theme’s fault.

But some evenings and nights have been spent hunting down the reasons, and it seems they have all been found and corrected. So when you click the “Apply” button, the Plasma Theme now instantly styles your Plasma desktop as it should, especially the panels.

And dressing up your desktop to match your day, your week, your mood, or your shirt with one of those (in part excellent) themes is only a matter of a few mouse clicks. No more restart needed.

Coming to your system with Plasma 5.16 and KDE Frameworks 5.59 (both to be released in June).

Make your Plasma Theme take advantage of Plasma 5.16

Theme designers, please study the recently added page on techbase.kde.org about porting themes to the latest Plasma 5. It lists the changes known so far and what to do about them. Please extend the list if something is still missing.
And tell your theme-designer friends about this page, so they can improve their themes as well.

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.

Add your Blog

If you are a KDE contributor you can have your blog on Planet KDE. Blog content should be mostly KDE-themed, in English, and not liable to offend. If you have a general blog, you may want to set up a tag and subscribe only that tag's feed to Planet KDE.

We also include feeds in different categories: currently Dot News, Project News feeds, User Blogs, and French, Spanish, Polish, and Portuguese language KDE blogs. If you have a feed which falls into one of these categories (or another non-English language), please file a bug as described below.

Planet KDE is kept in KDE's Git. If you have an account you can add or edit your own feed:

git clone kde:websites/planet-kde-org

Put your hackergotchi in website/hackergotchi/. A hackergotchi should be a photo of your face, smaller than 80x80 pixels, with a transparent background. git add the file.

At the end of the planetkde/config file add your details (the name in brackets is your IRC nick):
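For example (an illustrative entry only; the feed URL, name, and option values here are made up, so copy the exact format of the existing entries in the file):

```
feed 1h https://blog.example.com/tag/kde/feed/
    define_name Jane Doe (janedoe)
    define_face janedoe.png
```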

The embedded Twitter feed is made up of KDE contributors from the curated KDE People and Projects Twitter list, which is maintained by Jonathan Riddell. To get yourself or your project added, ping him or file a bug under the Twitter category.

If you do not have a Git account, file a bug in Bugzilla listing your name, Git account (if you have one), IRC nick (if you have one), RSS or Atom feed, and what you do in KDE. Attach a photo of your face for the hackergotchi.

Blog Classes

The default class for blogs is English language personal blogs. Other classes are:

Spanish language:

define_feedclass spanish

Portuguese language:

define_feedclass portuguese

Chinese language:

define_feedclass chinese

Polish language:

define_feedclass polish

Italian language:

define_feedclass italian

French language:

define_feedclass french

KDE User blogs:

define_feedclass user

KDE News feeds:

define_feedclass news

KDE Dot News:

define_feedclass dot

Planet KDE Guidelines

Planet KDE is one of the public faces of the KDE project and is read by millions of users and potential contributors. The content aggregated at Planet KDE reflects the opinions of its authors, but the sum of that content gives an impression of the project. Please keep in mind the following guidelines for your blog content and read the KDE Code of Conduct. The KDE project reserves the right to remove an inappropriate blog from the Planet. If that happens multiple times, the Community Working Group can be asked to consider what needs to happen to get your blog aggregated again.

If you are unsure or have queries about what is appropriate contact the KDE Community Working Group.

Blogs should be KDE themed

The majority of content in your blog should be about KDE and your work on KDE. Blog posts about personal subjects are also encouraged, since Planet KDE is a chance to learn more about the developers behind KDE. However, blog feeds should not be entirely personal; if in doubt, set up a tag for Planet KDE and subscribe the feed from that tag, so you can control what gets posted.

Posts should be constructive

Posts can be positive and promote KDE, or they can be constructive and lay out issues which need to be addressed, but blog feeds should not contain useless, destructive, or negative material. Constructive criticism is welcome and the occasional rant is understandable, but a feed where every post is critical and negative is unsuitable. This helps keep KDE a happy project overall.

You must be a KDE contributor

Only have your blog on Planet KDE if you actively contribute to KDE, for example through code, user support, documentation etc.

It must be a personal blog, or in a blog class

Planet KDE is a collection of blogs from KDE contributors.

Do not inflame

KDE covers a wide variety of people and cultures. Profanities, prejudice, lewd comments and content likely to offend are to be avoided. Do not make personal attacks or attacks against other projects on your blog.