The KOffice Team has announced the release of KOffice version 2.0 Beta 2, the second beta version of the upcoming version 2.0. The goal for the second beta is to show progress made since beta 1, as well as to gather feedback from both users and developers on the new UI and underlying infrastructure. This will allow the team to release a basically usable 2.0, demonstrating our vision for the future of the digital office to a larger audience and attracting new contributions, both in terms of code and of ideas for improvements. Since the last beta release a significant set of bug fixes and speed-ups has been integrated for all applications, and this release shows the shift of focus from new features to bug fixing until 2.0 is released. More information is in the full announcement, while the release notes tell you how to get it.

In my opinion, one key thing about using an office suite is the availability of a mature collection of templates: templates in my own language (from the user's point of view), for both private and business use.
It would be too simple to just say, "use any OOo template you like".
What we (well, at least I) need is a good manual on how to create templates, and a central, well-structured source for high-quality templates.

KOffice is special, and so quite likely not so easy for everyone who is used to other software. Good templates help people understand its principles and increase productivity.

The current theory is to throw RAM, GHz, and multiple cores at poor performance. IMHO, not really the best approach.

I don't see the issue as being the programming language, but rather that Qt/KDE effectively are a programming language of their own. I don't like C++, but I have no problem using it. I am able to use Qt by reading the Fine Manual. The problem comes with KDE, for which there really isn't a Fine Manual.

Yes, it would be nice if KDE and Qt were ported to FORTRAN 2008, which is the most modern commercial language. However, performance might still suck because of the overhead which is inherent in any high-level language.

So the issue isn't really about using a "real programming" language. To get higher performance you need to use a lower-level language. C is such a language, but it lacks features to the point that the compiler must optimize the code to use the features of modern microprocessors (it was designed for the PDP-11, which didn't have many features).

So with programming languages we are faced with a common problem: de facto standardization leads to obsolete modalities continuing even though more modern and much better alternatives exist. It is the same with the MP3 and MPG formats (both of which suck when compared to more modern formats).

So, I don't really know the answer, but clearly there is an issue in what you have said. For this reason, I designed PL/Fiv when I was in college many years ago. It is a meta-assembler that looks like a high-level language (NOT OO like Java) when written with the appropriate mnemonics, and can be run on a VM. The problem is that writing in such a language is much more work than using a high-level language. IBM suggested code reuse as a method to address this issue, but it never became popular. Some sort of automated assistance appears to be the possible solution to the problem.

So, we are left with a problem that has not yet been solved. A lot of code is written in C, which is an obsolete and poorly designed language (when evaluated by modern standards), or in C++, which has some of the same issues but is a semi-OOL that runs with high overhead, resulting in poor performance. While using a more modern language might help, the real performance problem is that the toolkit is written in an OOL, and that is what causes most of the performance issues. The answer would appear to be to write the toolkit in a meta-assembler. This would greatly improve performance, since GUI programs spend over 50% of their time in the toolkit.

The OS is also part of the problem. GUI programs run backwards, with the program calling the OS for messages; this leads to blocking of input. Using an RTOS where the program services interrupts would greatly improve things -- the same throughput would look like higher performance.

But, I would suggest that you prepare for a long wait -- I wouldn't hold my breath waiting for things to improve.

You seriously underestimate the performance of C++. C++ is now being used for most modern numerical libraries, and it's achieving better performance than C or the very well-optimized old-fashioned Fortran routines.

Dress something up in enough layers, however, and it will be slow. Your perception of C++ as slow presumably has more to do with how it's sometimes used than anything intrinsic to the language.

Blitz++ is good, but it's not proper C++ and still not faster than Fortran.
MTL is not really standards-compliant.
Boost/uBLAS is currently the only alternative. But: it has an overhead factor of 2 with small objects (the factor becomes smaller with larger object sizes), and the developers suggest using the ATLAS bindings to apply the good old Fortran libs to uBLAS data objects.

For Eigen we did intensive benchmarking of these libs, and we found that:
MTL and Boost::uBlas are very, very slow.
ATLAS is rather slow, but not as slow as the above.
Blitz++ is not intended for the same purpose at all: it handles multi-dimensional arrays (tensors), which most of the other libs don't do. But if you use Blitz++ as a matrix/vector library, it is extremely slow (slower than any other lib we've tried). It is just not designed for this use.

Did you use the recommended build options and optimizations? What I have seen in the past is that Intel and ATLAS compete fairly closely and that ATLAS is even faster in some cases. There are many reasons why we finally selected the "slow" uBLAS. And even there we found that, for example, the supported axpy products are slower than other operations offered by uBLAS.

Do you also plan to support various types of sparse objects in Eigen2? If so, I could give it a try in the future...

As we say on the benchmarks page, we do our best to make the best use of each library, but if you think we got something wrong, please don't hesitate to tell us. Our mailing list information can be found on our website, eigen.tuxfamily.org.

There is already an experimental Sparse module, supporting various kinds of sparse objects, providing a few algorithms, allowing the use of other libs as backends for more algorithms, and fully integrated with the rest of Eigen. It already has encouraging benchmark results. However, it will not be part of the 2.0 release; more likely 2.1.

> However, performance
> might still suck because of the overhead which is inherent in any
> high-level language.

This is a common misconception and I would like to correct it...

Just because a language is high-level doesn't mean that it incurs any overhead.

Most C++ features don't incur any overhead.

A few C++ features incur an overhead (example: virtual functions), but it is minimal for what it gets you, and equivalent C implementations consist of doing the same thing by hand instead of letting the C++ compiler do it, and so aren't any faster.

First of all, I didn't say faster, I said as fast. All non-interpreted, non-bytecode, non-VM languages have the exact same 'speed' (so this notion of the 'speed' of a language doesn't make any sense). That includes ASM, C, C++, and FORTRAN.

I really didn't mean to bring the topic to linear algebra libs, but since you asked for it... I'm one of the Eigen developers, and it's doing far better than ATLAS, and in certain cases it's as fast as or faster than Intel's and GOTO's BLAS implementations. See the benchmarks here: http://eigen.tuxfamily.org/index.php?title=Benchmark

But really, it's not even a matter of benchmarking: just looking at the C++ spec, you can see that it is designed from the ground up to have zero overhead for the most part and minimal overhead for the remainder, where minimal means that you couldn't gain any more speed by using another programming language. Bjarne Stroustrup sticks to the "you don't pay for what you don't use" principle.

> Just because a language is high-level doesn't mean that it incurs any
> overhead.

What? All high-level languages have overhead when compared to assembler or a meta-assembler (e.g. Intel PL/M).

The overhead in an OOL should be self-evident. Objects must be created, initialized, and destroyed, and all actions performed on the objects must go through indirection. Other overhead issues are going to depend on how good the compiler is, but there is no perfect compiler; compiled code will always be slower than human-written assembler. FORTRAN 2008 will probably be faster than C or C++, but this would require a good compiler (the GNU FORTRAN compiler sucks).

> The overhead in an OOL should be self-evident.
> Objects must be created, initialized, and destroyed

This, in itself, doesn't incur any overhead over assembly language.

If creation/destruction of your object does a dynamic memory allocation/free, then the equivalent in assembly language also does. No difference here.

Likewise, if the initialization implies setting the values of some fields, then so does the equivalent in assembly.

What I mean is that there is no inherent cost to a constructor/destructor. It's just a function, and in cases when the function is trivial and you want to avoid even the cost of calling it, you just inline it like any other function, so that cost too goes away.

> all actions performed on the objects must go through indirection

At low level (once your C++ program is compiled), an object is just a pointer. Fields of that object correspond to small fixed offsets from this pointer, like "ptr+4". That's exactly what you would do if you wrote equivalent ASM code by hand.

If you use a d-pointer (aka "pimpl") in your C++ program, then indeed you add one more level of indirection. However, C++ doesn't force you to do that. People (including the original Qt devs) discovered this technique long after C++ was designed. It is a means of isolating the implementation from the ABI. If you want that, you have to add a level of indirection; there's no way around that.

> Other overhead issues are going to depend on how good the compiler is,
> but there is no perfect compiler

Sure, the compiler (especially the frontend) becomes increasingly important as you use more and more OO/generic features of C++. However, recent compilers are very clever.

> compiled code will always be slower than human-written assembler

A human is able to do a perfect job on a small piece of code, for one platform, sure... not arguing against that. However, that's not what I would call "overhead" of compiled _languages_; that's just "imperfection" of _compilers_, and even as such this problem has almost disappeared. Only hard math code often remains poorly optimized by compilers, and there you can help within C++ by using SIMD intrinsics, peeling loops, etc.

> FORTRAN 2008 will probably be faster than C or C++

On the basis of what are you saying this?
For a long time people didn't believe that C++ could be as fast as Fortran because of its copy semantics, but modern use of C++98 (expression templates) allows one to overcome this, and C++ compilers have been handling it nicely for a long time now.

Using KWord pulled from SVN a couple of weeks ago, I noticed a bar on the left side that can be moved around but cannot be removed. This bar looks a lot like the ones you see in photo editing apps. Will there be an option to hide it, or is that option in already? There has to be a way to turn off any menu I don't want to see, and I found it odd that I couldn't turn this one off.

...and then dismiss it out of hand, because they fancy themselves superior user-interface designers who are smarter than their users, and believe the phrase "if it ain't broke don't fix it" doesn't apply to GUI design!

dpkg: error processing /var/cache/apt/archives/kpresenter-kde4_1%3a1.9.98.1-0ubuntu1_i386.deb (--unpack):
trying to overwrite `/usr/share/pixmaps/kpresenter.xpm', which is also in package kpresenter-data

What's the recommended course of action? Can the package be updated to fix this problem?

I'm also looking forward to an office suite that is natively integrated with Linux and especially with KDE. Although UI matters, by far the most important feature KOffice 2.0 should have is 100% compatibility with ODF (v1.2). I use OpenOffice intensively for personal and academic work, and will only consider moving to KOffice when I am assured of 100% ODF compatibility.

> Although UI matters, by far the most important feature KOffice 2.0 should
> have is 100% compatibility with ODF (v1.2). I use OpenOffice intensively
> for personal and academic work, and will only consider moving to KOffice
> when I am assured of 100% ODF compatibility.

Well, since neither OOo nor KOffice implements the spec perfectly, you'd better stop using both of them. Never mind, though; I hear Microsoft is interested in ODF these days, so perhaps you'll be able to use MS Office instead... :-P

As of beta1, I was unable to save flake objects within a document. The text was retained, but not the graphics. One of the developers mentioned that the code for saving was already in place and that it would be implemented in beta2. Can anyone confirm if that is the case? My distro hasn't released packages for beta2 yet.

Ok, OpenSUSE made beta2 available today and I was able to try it out. Sadly, it still does not save flake shapes in the document. It's a little disheartening to see that something so critical is still not working at this point in the beta release cycle. Without it, it's really not possible for me to give it a thorough testing. I continue to have high hopes for KOffice, and I hope this feature makes it into the next beta release.


When compared with other word processors, the startup dialog of KWord sucks.

I don't find that the demo templates help at all -- I find them confusing.

IMHO, what is needed is to make it easy for the user to create a default template named "Default", so that the user can click through the startup dialog with one click if "Default" is what is wanted.

Perhaps it would be better if using the "Default" template were the default. One issue with this is that the first time KWord is run, the "Default" template would need to be chosen (A4 or US Letter) based on the locale.