Topic: GUI Improvements (Read 18089 times)

I have used HADesigner, it's simply a wretched hack of a design. If it is so great then fix the fucking home screen with it or let's start talking about the problems with it and coming up with a solution to implement. I find it ironic that you put all this work into the MAME skin and making the screencasts to promote HADesigner, but haven't bothered to fix the most obvious and simple alignment issues. That's pathetic!

My god, bro. I'm trying to address your concerns, but it really seems to me that you're content to just yell like a little baby until you get your way. This is not how things get done.

Why don't you help work on a new tool, then? Or are you too busy yelling at the top of your lungs?

Or wait, are you saying you can't 'fix' the home screen to your liking?

Do you not understand why I did those MAME screens in the first place? The MAME Plugin/Player was my entry into how the system worked. I chose to do a media player/plugin pair because I wanted those media types, and they happened to be the most complex parts of the system. I needed a UI for them, so I needed to learn Designer. In the process, I studied Designer and its code, and Orbiter and its code, in great depth. I made the screencasts AS I was doing the work, so that OTHERS COULD BENEFIT FROM MY WORK, YOU TWAT! THE WORLD DOES NOT REVOLVE AROUND YOU. IF YOU WANT A BETTER MAIN SCREEN, HELP US FIX IT!

How long did you actually sit down and use Designer? It took me a few weeks to get the hang of it. I then did the screencasts immediately so that people could get over that hurdle more quickly.

What's wrong with HADesigner from your perspective? We all have our lists. And a new designer is being developed by Michael Wokoun (m1cha3l in IRC), in Java. It's proceeding at a slow pace because, well... like the rest of us, he has a job and a need to put bread on the table.

And I'm not PROMOTING HADesigner. I am showing the tools that we have, and how to get things done in them. The most difficult aspect of this is doing totally new layouts and design, which was the focus of those screencasts.

I stand firm on my own convictions of the system, because they are founded in the fact that I have dug my hands deep within the system, and learned precisely how it works. While I do have problems with implementation, the design is sound, and this extends to the concepts embodied in Designer and Orbiter....

Where are your facts to back up your assertions? Why don't you come to the table with some data?

los93sol, we all agree that many parts need improvements and a bit of fixing on the UI front.

But until you show up with some ideas on how to link the XBMC skinning engine with our hierarchical UI concept, I see no point in further discussing that route. We've already been there with Flash and some other guy..

Maybe you want to give a better explanation of what exactly you are thinking about. "Using the XBMC skin engine" is a bit short. It is not like we deny every alternative approach.

I lost the thread where this came up before, where I explained a bit about how XBMC's skinning engine is designed with a fall-through system that could be applied to supporting multiple platforms. The way their system works, the skin has one folder per target resolution (PAL, 720p, and so on):

When a skin is loaded at PAL resolution, all the skin files are loaded from the PAL folder and used to display the skin.

When a skin is loaded at any other resolution, for example 720p, all skin files are grabbed from the PAL folder, and any files that need to be tweaked to scale properly to 720p are loaded from the 720p folder. So if you have a home.xml in the PAL folder and not in the 720p folder, the home.xml from the PAL folder is used and scaled to 720p. If you have a system.xml in both the PAL and the 720p folders, the one from the 720p folder is used.
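The fall-through lookup described above can be sketched in a few lines. This is an illustration of the idea, not XBMC's actual implementation (which is in C++); the folder layout is assumed from the description.

```python
import os

def resolve_skin_file(skin_root, filename, resolution):
    """Return the path of a skin file, preferring the resolution-specific
    folder and falling back to the base PAL folder."""
    # Assumed layout: skin_root/PAL/ holds the complete skin;
    # skin_root/720p/ holds only the files that need 720p tweaks.
    specific = os.path.join(skin_root, resolution, filename)
    if resolution != "PAL" and os.path.exists(specific):
        return specific
    # Fall through: the PAL folder is the complete baseline.
    return os.path.join(skin_root, "PAL", filename)
```

So a 720p request for home.xml falls through to the PAL copy unless a 720p override exists.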

This fall-through system could be used to support different or tweaked skins on different devices without the need to recreate the entire skin. Obviously some things should be designed to display on a TV, some on web pads with touch screens, and some on smaller handheld devices like PDAs.

They also have a "theming" system in place that allows you to use the same skin files, but load images from different folders. What this means is you can have, for example, the same skin, but in different colors. This way a skin can be revamped to match your room fairly quickly and with minimal effort, by simply changing some color palettes in Photoshop.

It also allows the skinner to declare variables and call them from different routines, so you can create skin-specific settings as well. When used properly, this gives the skinner the option to let the user decide which screens they want to see. These settings could also aid with supporting multiple devices, since obviously some PDA devices are touch screen and some aren't, and some phones have full QWERTY keyboards and some do not. There are several variables that can be accounted for using a system like this.

The architecture is what I feel really needs some discussion since I do not know everything about LMCE, but have been around and dug into XBMC code enough to know and understand how it works. My understanding on LMCE is that everything is exposed through device codes, and the entire system is controllable through these device codes. If that is correct then the XBMC skinning engine works quite similarly. They expose control of their software through action codes. So for example you could have an action code like mediaplayer.play or mediaplayer.skipforward. Those are then applied to a button or any other control in the XBMC skinning engine. I'd be interested to know more about what ideas you guys have as far as whether LMCE could be controlled in a similar manner. Again, my understanding is that this is very close to how LMCE works already.
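The action-code idea above can be made concrete with a toy dispatch table. The action names and the `MediaPlayer` class here are illustrative stand-ins, not the real XBMC action set or API; the point is only that a skin button carries a string, and the engine resolves it to behavior.

```python
class MediaPlayer:
    """Toy player used to show how action codes bind to behavior."""
    def __init__(self):
        self.state = "stopped"

    def play(self):
        self.state = "playing"

    def skip_forward(self):
        self.state = "skipping"

# The dispatch table: action string -> bound handler.
ACTIONS = {}

def register(player):
    # Hypothetical action names in the spirit of mediaplayer.play etc.
    ACTIONS["mediaplayer.play"] = player.play
    ACTIONS["mediaplayer.skipforward"] = player.skip_forward

def on_button_press(action_code):
    """A skin control carries only an action string; the engine
    looks it up and invokes the bound handler."""
    handler = ACTIONS.get(action_code)
    if handler is None:
        raise KeyError("unknown action: " + action_code)
    handler()
```

If LMCE's device codes work the way described, mapping them onto such a table would be the natural bridge.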

There is another piece of the XBMC skinning engine that is very interesting: the animations. A skinner can create a Flash-like animated menu with as few as 10 incredibly simple lines in an XML file. Animations can be fades, slides, etc., and more advanced skinners can apply vectors to these animations.

They also expose a plethora of visibility conditions, ranging from a basic button.hasfocus() to any of the action codes available. There is literally no end to the possible combinations that can be used.
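Visibility conditions are just booleans that compose. A toy version of the idea (the names `Control`, `has_focus`, `all_of`, and `negate` are all illustrative, not XBMC's actual condition syntax):

```python
class Control:
    """Toy stand-in for a skin control."""
    def __init__(self, name):
        self.name = name
        self.focused = False

def has_focus(control):
    # Analogue of a button.hasfocus() visibility condition.
    return lambda: control.focused

def all_of(*conds):
    # AND-combine conditions, the way skins chain them together.
    return lambda: all(c() for c in conds)

def negate(cond):
    return lambda: not cond()
```

A control becomes visible or hidden as the composed condition flips, with no code changes to the engine itself.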

That said, this is a very brief introduction to what can be done with their engine. I personally would like to see some interest in integrating XBMC into LMCE and letting it handle media organization, playback, etc., but I realize that is likely a HUGE undertaking and one that would likely require a cooperative effort from several people.

I do agree that the current HADesigner is capable, but even with all the screencasts and the amount of work Thom has put in trying to help people understand it, the system is just too difficult for people to get their heads around.

Again, I did talk with JMarshall (who wrote almost all of the skinning engine for XBMC), and he said if anyone from LMCE is interested, have them get hold of him; he can be found on the boards at xbmc.org, or in #xbmc on EFNet. I'd like to hear what ideas you guys have for applying the features of XBMC's skinning engine to LMCE.

Just one more thing I'd like to point out: there are literally dozens of people from the old underground Xbox scene who have a sound understanding of the XML-based skinning system. Most of these guys are still around and actively creating skins, and new ones are popping up all the time. Those users could very well show interest in applying their talents to LMCE as well, if it were as simple as what they're used to. I've seen at least 5 LMCE users skinning XBMC in recent months, so they are out there.

Thank you for your explanation, and I feel that we could use a hybrid of this in the long run.

With that said, you also need to keep in mind that there is a wide variety of devices that we have to target. You haven't addressed these at all.

All of these devices have at their core vastly different CPU configurations of different speeds, as well as differing rendering architectures for graphics. For this reason, Orbiter was designed as a lowest-common-denominator approach: everything is designed around simple bitmaps and simple actions, sending messages to various devices (usually the negative-numbered virtual devices, from Orbiter's standpoint).

We have animations too, in the form of MNG. We don't have anything more capable, because again, we are trying to provide a base-line for all output devices in the system. This is not likely to change. I will not sacrifice support of one or more devices, just to make the TV screen more capable.

I suggest you spend some serious time looking at the code, to see why things were done the way they were.

I might be misunderstanding something here, but doesn't the fall-through system address the multiple devices by allowing you to render different graphics, layouts, even actions per screen and per device? That system can be utilized to load lower-quality images; the de facto standard for XBMC is the .png format, but it will also accept lower-quality bitmaps, JPEGs, etc.

Unfortunately, I'm not so good at code. I learned just enough C++ to get my hands a little dirty with XBMC, and was able to implement some things after much explanation from the developers there and countless hours for each small change. I will attempt to read through it and understand what's happening, but can you point me to some good starting points: which files in the source, and any specific functions or classes I should be looking for?

sql2cpp is run on the build version of the database (you should never run it on your own local copy, because certain things that are only used when building a release are stripped out of a packaged release, yet they need to be present in the resulting sql2cpp libraries) to create the libraries pluto_main, pluto_media, pluto_telecom, etc. These libraries are used ALL over the system, and especially in Orbiter, to abstract access to the database. If you look inside pluto_main/, you will find a whole set of Define_xxxxxxx.h files. These are constants generated from the individual Define columns in each part of the database, used to distinctly reference a database element inside the code, and they appear quite a bit. DESIGNOBJTYPE_Web_Browser_CONST, for example, refers to the row in the database whose Description is Web Browser. Use these to trace the relationships between the code and the database.

Orbiter/ itself is where you'll be spending most of your time. It is a beast: well over 200,000 lines of code. The vast majority of it is screen logic, with the rendering parts pushed out to subclasses. Most of the stuff that makes Orbiter work is in Orbiter.cpp. Here you'll find the different designobj types and what they can do, as well as variable substitutions, etc. All of the different devices use the same code. Orbiter itself comes in two main flavors: a fat client, where the graphics are pushed to the target device and Orbiter runs on that device, sending DCE messages back; and a proxy client, which is how the Cisco 7970, Web Orbiter, and mobile phone orbiters work. These assemble things together just like the fat clients, but instead render out a flattened image, which is then pushed to the clients; the clients accept the image and merely transmit back "button presses". Regardless, the same code is used, just with a different rendering subclass.

It must be noted that since Orbiter depends heavily on the database, and we have a shared database infrastructure in place, building a standalone theme package format is not practical. Instead, theme developers should work in small teams, one member having a username for our sqlcvs repository and submitting work to it as needed, as well as providing the necessary skin directory so that a package can be made. This ultimately means that new skins become available to everyone as they update their database, and can be selected as needed, with the system downloading the skin package on demand. Once a skin has been installed, a regeneration is needed, and the skin will be used from that point on.

Anything done in Designer has to be generated. This entails pre-rendering output versions of graphics so that they can be used by Orbiter. OrbiterGen does this: it takes the database definitions and tries to figure out the largest pieces of a designobj to render into one graphic (it tries to reduce to as few rendered graphics as possible, so if you have a static screen that doesn't change, it will render all of the pieces on it as a single graphic). This includes not only statically placed designobjs, but also designobjs of type Array (when you see a row of buttons, such as the scenario buttons, those are button arrays), which are replicated and then rendered.

It's worth noting that we use PNGs and MNGs exclusively; alpha channels can be used, and are used quite a bit. Depending on the Orbiter implementation, a background color may or may not be rendered (in UI1, all transparent pixels take on the background image).

In the spirit of pre-rendering the images, any MNG graphics are ALSO pre-rendered. That is, each and every frame is output and composited together with the underlying exposed pieces. This also means that if you have any potentially overlapping animations, interesting artifacts may result, because OrbiterGen is not able to work out that a completely independent set of frames needs to be rendered.

I don't want to get kicked too for joining this discussion, but wouldn't it be possible to do a "quick" demo XBMC orbiter, without needing too detailed a view of the LMCE code? Just as a proof of concept...

Something like:
- add a new regular orbiter for the same system the XBMC orbiter sits on
- use an SMB dir on the core for skin and XML files
- write a small app able to capture/send messages (using the regular orbiter's device id created before) and use XBMC as the skinning engine

I don't see how it could prove the concept unless it was a replacement for the on-screen orbiter. That would mean completely unthreading the existing on-screen orbiter device so that the POC orbiter could overlay itself on top of xine/pss/etc. And it would then need to handle all the functionality of the existing orbiter, which, even without having read the code myself, would be a massive undertaking, as I assume the current orbiter is doing far more than just displaying screens, capturing clicks, and sending/receiving a few DCE messages. What about all the interfacing with the SQL database to determine command paths down pipes, getting media lists, etc.?

It would be nice to see a POC; I just don't see any way of doing this "quick"ly. If you want to know what it could look like, look at XBMC. The question really is more about how much work is involved in overlaying the UI part on top of all the other orbiter functionality, and also making the UI able to scale and target different devices — and the answer to that seems to be "lots"! I don't think anybody is saying that another UI engine can't be used; the issue is that the orbiter is much more than a UI engine....

Okay, I've been thinking more about this, and I would like to see what your thoughts are on this as a proof of concept. It is apparent that doing this properly would require a serious amount of work, but I think I know how to do it in such a way that it could be done quickly. XBMC supports Python scripting that can be run from within the skin in numerous ways. Since LMCE is designed so that its functions can be called from various devices, I would assume that means a Python script could be written to act as a liaison, so LMCE would effectively run in the background of XBMC; it would demonstrate what can be done for LMCE skinning through their engine.
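A minimal sketch of what such a liaison script might look like. Everything here is an assumption: the message layout, the command ids, and the device ids are made-up stand-ins, not the real LMCE DCE protocol or API; the sketch only shows the shape of the mapping from a skin action to an outgoing message.

```python
def format_dce_message(from_device, to_device, command_id, params):
    """Build a plain-text stand-in for a command message.
    The field order and format are invented for illustration."""
    parts = [str(from_device), str(to_device), "1", str(command_id)]
    for key, value in sorted(params.items()):
        parts.append("%s=%s" % (key, value))
    return " ".join(parts)

def on_skin_button(action_code, orbiter_id=20, media_plugin=-1):
    """Map an XBMC-style action code onto an outgoing message.
    Command ids and device ids here are made up."""
    command_map = {"mediaplayer.play": 37, "mediaplayer.stop": 38}
    cmd = command_map.get(action_code)
    if cmd is None:
        return None  # not an action the liaison handles
    return format_dce_message(orbiter_id, media_plugin, cmd, {})
```

In a real POC, the returned string would instead be handed to whatever transport LMCE actually uses, which is exactly the part that needs input from people who know the code.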

Am I correct in thinking that the fundamental issue here is being able to have multiple UI engines use the same definition files for rendering screens? Meaning that you design a screen once, and then the UI engines for UI1, UI2, UI2+AB, JavaMO, Win32, etc. all use the exact same design files, but render them in different ways using their own engine. The point being that any new/modified screens are only done once, and you can guarantee that all the different UI engines will be able to render the same screen on any device, albeit they will obviously look different.