I have written a small article that demonstrates the use of KParts. You can find the article here. The tutorial demonstrates the ease with which KParts can be embedded in applications, and discusses their use in KOffice. This article should also be a great way for developers to get up to speed with this powerful KDE technology.

Comments

Nice to see more resources about KParts coming up, this is cool.
Thanks for your continued work on promoting KDE technologies!

Hehe, I like the line count argument for koshell. What isn't known is that I had to add special hooks/code for KOShell in kofficecore ;-)

Another comment: assert()s are very dangerous, don't use them. Especially in code like this, where nothing guarantees that the component will exist, will be openable, and will contain what you expect. Always prefer something like "if (!condition) return 0". It's no longer, and it avoids handing crashes to the unsuspecting user ;)

So where will Qt3's component architecture fit into KDE3's KParts framework? Will there be a bridge to connect the two, so that for instance one could use the Opera renderer in Konqi, or embed a HancomSheet document in KWord? I know it may be early in the game to ask this, but I am quite interested in the possibilities....

Hello, Shawn! Speaking of frameworks, how is Korelib coming along? Are you still actively developing it? It seems like a nice lightweight component library for cross-platform stuff. I wish you guys would publish more technical papers about all the wonderful stuff you engineer... anyone know of a good technical writer these guys could hire ;-) ?

Which is better, Qt3's solution or KParts? KParts seems very elegant, but I'd think it would be better if it could be done within Qt with as few dependencies as possible. It'd certainly make KDE more portable.

Should we make KOffice even more "KParts" (i.e. split the apps up into more components, like KSpellcheck, etc.)?

I'm not a programmer (yet!), but it seems like a very likable idea. You could even download (live) new components you might need, and you would deliver a very flexible solution.

Maybe you could take it even further, and make all of K(DE/Office) write into one XML database (internet-based, of course!). That way you would have .Net (i.e. write together into one database on the net with your co-workers), and a very flexible unix-tools-like Office conglomerate of small tools you could use to edit the data in your central database of "XML" documents. Of course, the data would start by describing which component should be used to edit/view/print it...

KParts is, as I understand it, ideal for this kind of solution. (I don't know enough to say for sure whether it would be possible to take it all the way to the central XML-database idea...)

What you say is more or less the way KDE 2 works :-)
We have konsole and the embedded konsole -> the same binary (lib)
We have kate, kwrite, the embedded text viewer in konqy, and AFAIK the upcoming kdevelop will also use the kwrite part -> the same binary (lib)
You can download and install additional ioslaves to access completely new resources and so on.
I'm not sure right now, but I think even your example (KSpellCheck) already exists :-)
You can view kword texts inside konqy using the same binary as when writing kword documents.
And you can download new parts and install them on the fly, but currently most parts are in the base packages, so you don't notice it that much.
But this downloading and installing ain't that simple in binary form, due to different lib dependencies, compiler versions, CPUs, paths and so on.
Also the author makes it appear a little bit too easy: "use off-the-shelf algorithms". These algorithms have to be glued together, they have to understand each other, they need a common GUI, they need to be implemented bug-free. Although "text-book", it ain't that simple.

Mentioning kate: I had heard that kate should be the new advanced editor in KDE, replacing kwrite. As my current distribution, containing KDE 2.1.2, still starts kwrite as the advanced editor, I just wonder how the situation really is. Is kate part of the core KDE package?

A very beautiful dream indeed: multiple small components, called only when needed. But, sadly, I can't dream of using .Net-like technology. Connection cost is very high here (I can get a full lunch for < US$ 0.5 here).

CORBA is for distributed systems, and a desktop ain't distributed. The developers had all the problems of working with distributed systems without actually working on a distributed system. CORBA ain't easy to learn (I know from experience); it actually requires an almost entirely new way of programming. CORBA has nothing to do with embedding GUI stuff, but it was used in relation to this. And CORBA (MICO) apps take ages to compile and don't run that fast. The actual CORBA was replaced by DCOP, which now does the IPC and is very lightweight, both in terms of speed and memory and in terms of learning how to use it.

> > CORBA ain't easy to learn (I know from experience), it
> > requires actually an almost entire new way of programming.
>
> Then how come so many GNOME developers learned it in such a short time?

Did you try to use CORBA ?
Then you wouldn't ask, I'm sure. I don't think (but I don't know) that there are so many Gnome developers *really* understanding the CORBA stuff.
I don't say they are stupid, I only say what I expect from my own experience with CORBA, and I don't consider myself especially stupid (except sometimes ;-)

> > And CORBA (MICO) apps takes ages to compile and don't run that fast.
>
> That only proves that MICO is slow. Try compiling an ORBit app.

Yes, I didn't say more.

> And if CORBA is slow, how come Berlin, a new windowing system, uses CORBA for IPC?

Did you ever see Berlin in action ?
I did; it ran on a fast machine (PIII 800 or something like that) and it wasn't fast. It was very capable, but not usably fast even on that machine.
Probably the reason is not CORBA but the fancy graphics stuff, but at least this is no "look at Berlin to see that CORBA is fast" proof.
And of course Berlin needs some networking infrastructure to provide network transparency like X11 does nowadays. KDE doesn't need a network to embed kwrite into konqy.

If you look at the Berlin FAQ, they do admit that CORBA is a lot slower than, for example, KParts. Their approach is to use a very high-level API to minimise the communication between the client and the server (e.g. where X would issue 20-odd commands to draw a button, Berlin would just issue a single "Draw Button" command). They use CORBA to give them the network transparency of X. It's an interesting project, but frankly I think the Unix world really needs an X12 specification to get rid of all the dross that the great people in XFree86 are lumped with.

> Did you ever see Berlin in action ?
> I did; it ran on a fast machine (PIII 800 or something
> like that) and it wasn't fast. It was very capable, but
> not usably fast even on that machine.
> Probably the reason is not CORBA but the fancy graphics
> stuff, but at least this is no "look at Berlin to see that
> CORBA is fast" proof.

According to all those anti-X11 trolls on Slashdot, X is bloated and must die, and Berlin is much lighter and faster than X...

I can confirm that CORBA gives programmers a hard time. The unofficial CORBA bible for C++ programmers has more than 1000 pages and it doesn't even cover every aspect. To really understand CORBA, you have to have a lot of time. Most developers would rather spend this time on functionality and improvements. CORBA should be used where it is really useful: in highly distributed systems.

In my experience of using CORBA, I would rather say it simplifies things for programmers, but as with all techniques you have to learn how to use it first (which is non-trivial, unfortunately).

I have tried multiple ORB implementations with different levels of success. But to say that the CORBA technique is to blame is probably not correct.

Have a look at TAO, which includes a real-time ORB: http://www.cs.wustl.edu/~schmidt/TAO.html
One problem with CORBA applications that has been brought up is that they take a long time to compile; with this I have to agree. But on the other hand, this is only a problem during the development phase.

CORBA is great, this type of technology might be the future of computing.
Or at least in certain areas of computing. Distributed computing, grid computing, ... and associated technologies (CORBA, Globus, .NET, MONO...) are very active trends these days. Why? Because they address lots of current issues. I am more the grid type of person (scientific computing) and I can tell you that this type of technology, enabling distributed computing but at the same time hiding the nasty details, is important, really important.
It raises the problem of "hiding the details", which is by no means easy: analysis and design are tremendously important there; you need to sit down and think about things in a different way than before. Speed is certainly an issue, and GUIs are certainly very demanding from this point of view. My view is that right now it is difficult to create a good distributed framework due to hardware (computer and network) constraints and also software (slow implementations of protocols) constraints.

However, if the project fits (little data to manipulate through the distributed computing system) then you have a winner!
Yes, CORBA is a "difficult" technology, partly because it needs a good implementation, but also because it needs the programmer to rethink his design and the way he develops things.

MICO is not "bad", I use it all the time. It is very complete and that's what I like. I think (correct me if I am wrong) that it is the most complete CORBA/CORBA Services implementation available.

For my project, MICO is doing VERY well because the design and application fit perfectly into the big picture. Now MICO is also going multithreaded (see http://mico-mt.sourceforge.net) and I am sure that with this, more and more people will use it.

Now GUIs are another concern, and it seems that lightweight ORBs are working (see GNOME's Bonobo technology). I don't think that KDE or Gnome are right or wrong here: I think that if they do their DESIGN well, then having distributed components isn't or won't be a problem at all. I am convinced that design matters a lot.

From the article:
"In Gnome, the component technology is provided by Bonobo, which is
built upon Corba."

and

"In short, a good component technology has the following
characteristics:

* it is easy to enable an application as a component
* it is easy to use a component in another application
* the component is activated quickly (almost instantaneously).

Corba meets none of these requirements, while KParts meets all of them.
Corba is a very good technology but it is definitely not suited for gui
components. Read what follows to see how KPart is a success in this
area."

While Bonobo is based upon CORBA, it provides default implementations
for most stuff, so most of the time you don't have to implement any
interfaces at all. My example below shows that. However, you can if you
need to.

KPart seems like a glorified shared library loader; there's nothing
wrong with that. In fact, it's a lot easier to implement, and has some
performance advantages (but you can optimize shared-library
CORBA servers to approach virtual function call speeds - in fact, work
is being done in GNOME's ORB (ORBit) to achieve this).

On the other hand, Bonobo's CORBA-base allows for flexibility which is
not possible with KPart (as far as I can see):

1) out of process components
2) components on other machines (may not be suitable for GUI-components,
but perfectly reasonable for non-GUI-components)

note that 1) and 2) can be done transparently -- client doesn't have to
know where components live.

3) interaction with other languages (e.g. you can use Bonobo from C++,
Perl, Python, Guile,...). It would be hard to write a KParts-component
in Perl, I guess...

Hi Dirk Jan, I have read your tutorial (very well written!) about Bonobo, so I am not speaking ignorantly.

I plan a more detailed comparison of Bonobo and KPart in another article (when I have time!) but I can at least say a few things:

> On the other hand, Bonobo's CORBA-base allows for flexibility which is
> not possible with KPart (as far as I can see):
> 1) out of process components
> 2) components on other machines (may not be suitable for GUI-components,
> but perfectly reasonable for non-GUI-components)
> 3) interaction with other languages (e.g. you can use Bonobo from C++,
> Perl, Python, Guile,...). It would be hard to write a KParts-component
> in Perl, I guess...
> 4) Allows for independent implementations (e.g. there's a Java-based
> implementation of Bonobo, 'MonkeyBeans').

Except for out-of-process components that can sometimes be very handy, the stuff you described will actually almost never be used.

I would say that 90% of Gnome is written in C, but it is probably more than that. Remote components are a cool hack, but even you are doubtful about the usefulness of this (quoting your tutorial: "there must be someone somewhere who needs this").

Kde has chosen the technology that solved the problem: "I want an application as a component". Gnome has chosen the technology that is theoretically the top and can do a lot. But in practice 95% of the users won't use the things that made you choose Corba over shared libraries. You are forcing them to use a more complicated technology because you want the remaining theoretical 5% to be able to use remote embedding.

The comparison that comes to my mind is this famous guy who designed an operating system that would not be Unix although it would resemble it a lot. It would be cool, have a micro-kernel, use message passing and provide more freedom for the user. 10 years later, this OS is still struggling to get support and applications. Besides, there is this cool Finnish student who wrote a cool hack on x86 that would solve exactly his problem. This cool hack is no longer a cool hack but a widely used OS. The fact that it was originally designed specifically and only for x86 doesn't stop it from running on many architectures now.

You are right, though, that KPart doesn't provide the flexibility of Bonobo. But there is XPart, which provides exactly what you want. It uses X to embed out-of-process applications. So if you have a remote X server with applications written in a different language, on a different workstation/OS, you can embed it with XPart. Still no need for Corba.

XPart is sufficiently generic to make a Bonobo-XPart bridge, or to be used outside KDE (it depends only on X and Qt).

KDE made the good choice: shared libraries and ease of use for direct components. More exotic technology for more exotic needs.

Note that nobody has used XPart yet, nor requested remote components. IMHO, this is because KPart solves 99% of the needs for components and nobody needs remote components. I will use XPart for kvim because of its out-of-process property.

I have absolutely nothing against Gnome. I simply think the Gnome project has made some wrong technical choices.

> It's extremely simple to create a Bonobo component (control) from a
> regular gtk-widget, in fact I show this in the tutorial (ipentry is a
> gtkwidget I've written as well:
It is true that creating a Bonobo component is as simple as creating a KPart.
But what the Bonobo tutorial also told me is that, in comparison to KPart:
- it didn't compile on my Mandrake :-)
- your component doesn't add any menu entries and there is no dynamic activation, it is just a static widget
- communicating with the component is painful. You must encapsulate your data in the property bag
- using signal/slots is painful
- you don't have remote scripting

Moreover, I find the component accessing and embedding more difficult. The problem is that for every interaction with the component, you must go through Corba. You don't have this problem with in-process components.

> Hi Dirk Jan, I have read your tutorial (very well written!) about
> Bonobo so I am not speaking ignorantely.

Ok. I'm equally well-educated in KParts, thanks to your article ;-)

> I plan a more detailed comparison of Bonobo and KPart in another
> article (when I'll have time!) but I can at least say a few things:

That would be interesting.

> > On the other hand, Bonobo's CORBA-base allows for flexibility which
> > is not possible with KPart (as far as I can see):
> > 1) out of process components
> > 2) components on other machines (may not be suitable for
> > GUI-components,
> > but perfectly reasonable for non-GUI-components)
> > 3) interaction with other languages (e.g. you can use Bonobo from
> > C++,
> > Perl, Python, Guile,...). It would be hard to write a
> > KParts-component in Perl, I guess...
> > 4) Allows for independent implementations (e.g. there's a Java-based
> > implementation of Bonobo, 'MonkeyBeans').
>
> Except for out-of-process components that can sometimes be very handy,
> the stuff you described will actually almost never be used.
>
> I would say that 90% of Gnome is written in C but it is probably more
> than that. Remote components are a cool hack but even you are doubtful
> about the usefullness of this (quoting your tutorial "there must be
> someone somewhere who needs this").

Sure, GNOME's mostly written in C, but that doesn't really say much
about third-party use. I don't have any numbers here, but I've seen
quite a bit of use of the Gnome/Python combination, which is very nice,
for example for database frontends. So, in the GNOME world, non-C/C++
languages are more important, and therefore should have Bonobo
support.

> Kde has chosen the technology that solved the problem: "I want an
> application as component". Gnome has chosen the technology that is
> theorically the top and can do a lot. But in practice 95% of the users
> won't use the things that made you choose Corba over Shared
> Libraries. You are forcing them to use a more complicated technology
> because you want the remaining theorical 5% to be able to use remote
> embedding.

Well, the funny thing is that with Bonobo you don't need to deal with
CORBA complexity if you don't want to, but it's there if you need it.

Hmmm.... a CORBA server can be a shared library, and it is possible to
do this without too much overhead (just a bit more than a C++ virtual
function call, but not too much).

Writing CORBA-code (esp. servers) in C is a bit inconvenient, but the
beauty of Bonobo is that you don't have to. Note that there's also a
very nice CORBA binding for Python, which makes writing CORBA servers
/ clients as easy as writing an ActiveX client or server in Visual
Basic.

XParts sounds like reinventing CORBA, without the benefit of using a
widely supported standard.

> XPart is sufficentely generic to make a Bonobo-XPart bridge, or to be
> used outside KDE (it depends only on X and Qt).

The core Bonobo (cvs-versions) doesn't depend on either X or GTK+. But
a bridge with KParts could be an interesting exercise.

> KDE made the good choice: shared libraries and ease of use for direct
> components. More exotic technology for more exotic needs.
>
> Note that nobody has used XPart yet, nor requested for remote
> components. IMHO, this is because KPart solves 99% of the needs for
> components and nobody needs remote components. I will use XPart for
> kvim because of its out-of-process property.

GNOME uses out-of-proc servers a lot, as well as inproc servers. You
could even choose at runtime if you want the inproc version or the one
running on the machine down the hall. Note that for example the Sun
people (who are actively contributing to GNOME) support Bonobo
especially because it uses CORBA, which goes nicely with their
existing infrastructure.

> I have absolutely nothing against Gnome. I simply think the Gnome
> project has made some wrong technical choices.

Gnome made some different choices.

> It is true that creating a bonobo component is as simple as creating a
> KPart. But what the bonobo tutorial also told me is that, in
> comparison to KPart:
> - it didn't compile on my Mandrake :-)

I didn't receive your bugreport. Please send me one.

> - your component doesn't add any menu entries and there is no dynamic
> activation, it is just a static widget

What is 'dynamic activation'? And you're right, I should add some stuff
about adding menu entries... it's quite simple!

> - using signal/slots is painful

You could transport gtk-signals from server to client using a property
bag. I plan to write some wrapper code to make this (almost) the same
as using plain gtk-signals.

> - you don't have remote scripting

Why not? You can use Perl/Python/Guile if you like.

> Moreover, I find the component accessing and embedding more
> difficult. The problem is that for every interaction with the
> component, you must go through Corba. You don't have this problem with
> in-process components.

CORBA servers can be inproc if you like. The way you access them is
exactly the same no matter if they're inproc or outproc.

Anyway, it seems both technologies (Bonobo and KParts) are doing what
they're designed for, and it will be interesting to see what happens.
I hope we will have some interoperability!

It seems to me that making/using Bonobo components is easier, seeing that no 80-line block of code has to be used. I understand that those 80 lines are very generic and that a certain IDE will provide them, but 80 lines is still 80 lines; the only time that's good is when one is talking about coke.

> 2) components on other machines (may not be suitable for GUI-components,
> but perfectly reasonable for non-GUI-components)

I've always been curious - how do you handle security issues with multi-system components like this? How can you contain a malicious component which is designed to advertise all services and spread across a server pool or even an entire network?

: What you need is an authentication layer plus crypto on top of your framework

How does that prevent one compromised host from spreading through an entire network? And I think that this thread should probably be taken to email, unless any KDEers are interested in theory for potential application to future Qt3/KParts issues.

What a conspiracy! And how awfully Gnome-bashing this article is! Evil, evil world!

Hey guys, the KDE developers gave Corba a try and switched to KParts. I'm no developer, so I can't judge this decision, but for me as a user there seem to be only advantages to this.

However, isn't it obvious that a KDE developer, in an article about KParts, stresses the disadvantages of Corba *for the specific use within KDE* - BECAUSE these disadvantages were the reasons they dumped Corba and DEVELOPED KParts?

Please... what advantages do KParts give you as a user? The component-based approach provides the advantages - not the particular implementation. Have you even tried any Bonobo-based applications? Didn't think so. From a user's point of view they work exactly the same as KParts-based ones - the only difference being look and feel due to differences in widget sets.

The article is just plain inflammatory - instead of pointing out the merits of kparts it starts out by bashing Bonobo, making several inaccurate statements about the Bonobo technology. It then concludes that Gnome is far behind KDE. WTF? This is supposed to be an article on "KDE components", not "KDE vs Gnome".

The first mentioning of KDE/Gnome/Corba I find in this article goes like this:

"In Gnome, the component technology is provided by Bonobo, which is built upon Corba. KDE also used Corba in the past, but eventually dumped it for an home-made technology: KPart. This choice has been very criticized although almost nobody understood the ground and the consequences of it. It was a good choice and I'll write an article one day, to explain why."

Now... what's your problem with this? The author states facts; he doesn't even say that Gnome is worse off with Bonobo, he just says that Corba was bad for KDE's needs and that it was good (for KDE) to replace it with KParts. A few sentences later he even states that "Corba is a very good technology" - just not fit for KDE's needs. He simply discusses the technology and why the KDE developers chose to develop a new approach instead of using the existing Corba technology.

As for Gnome being behind KDE in terms of component stuff - hey, open your eyes, it is! Bonobo exists, but it isn't really an integral part of Gnome as a whole yet. Yes, I give Gnome a try once in a while, but as of 1.4 it's not yet as integrated as KDE. Maybe 2.0 will change this. But 2.0 isn't released yet.

Discussing technical merits and pointing out the advantages of the technology KDE has chosen is definitely not inflammatory. I'd like to know how much of the article you actually understood or even read, besides the few lines where the author mentions Gnome.

Just because CORBA isn't a good solution for KDE, the author of that article talks about the disadvantages of CORBA at the beginning of the article.
However, nobody cares about that. People will just use the current solution because it works.
What reason is there then to talk about why CORBA is bad?
There is none. So I can only conclude that the author writes that to bash GNOME technologies.

The author claims that CORBA is slow, and he mentions Bonobo. However, the author didn't say that they used MICO, one of the slowest CORBA implementations in the world.
ORBit happens to be the fastest CORBA implementation, but the author didn't mention anything about that.
This will cause people to think that all CORBA implementations are slow.
They will think that GNOME and Bonobo are slow, since they use CORBA.
Those people will tell other people that GNOME and Bonobo are slow, thus spreading the virus.
Those people will blindly believe what they've been told, and thus not even bother to try out GNOME.

That article does not infect people with some magical virus that causes them to hate GNOME. No one who understands OSS at all is stupid enough to use a single, semi-related sentence from an article that has nothing to do with GNOME as the basis for their entire KDE/GNOME decision.

The author does not bash GNOME technologies; he just says that KDE uses DCOP, not Corba, and explains why that decision was made by the KDE developers.

When KDE was developing its component technology (more than 2 years ago), they were using Mico. They had optimised it and reduced its size to get better performance, but it wasn't enough, and a lot of the problems were not due to Mico itself.

Anyway, they couldn't have used Orbit, because at that time Orbit was almost nothing. Even the C bindings were not stable. The C++ ones were... planned! And KDE needed components right then.

So they moved to KParts and shared libraries. IMHO, this is one of KDE's best choices, after using Qt. KParts was developed very quickly and was very stable, as opposed to Orbit, which took a long time to develop. This has certainly had a negative impact on the development speed of Gnome's component technology.

Yes, a good idea, but I had problems with the implementation of XML builders. Editing toolbars screws up my 'Settings' menu and makes the 'Help' menu disappear. I could not figure out anything from the documentation - it consists of a few comments (or I'm spoiled by the detailed Qt docs), so I had to copy code from other "official" apps. It is a wonderfully simple idea, but I'm lost in all those KXML.. classes and a few dozen methods. The implementation looks definitely too complicated.