The whole question, summarised for answerability:

How would it be best for a designer to define really complex UIs?

Would it be a good approach to map how the mental creative process works into tools?

Someday I want to build a better UI designer for desktop applications than what currently exists (it's a long-term project), and I'm already trying to gather thoughts on whether a truly extensible framework is possible.

The main problem I see is that you need to be able to iterate between high-level abstractions, such as interactions between components, and low-level capabilities such as applying blur to a semi-transparent surface. This requires good models of what a user interface is composed of.

An "animation" could be anything, from applying effects to an image (like a blur, or noise, etc.) to translating objects in 3D, to making a flare of light travel through a path while filling it, to anything one can imagine.

Interactions are also complex. Maybe you don't want one animation to run while another is underway, or while some data is missing, or maybe you just want to trigger some action on some event.
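
To sketch what I mean, here is a minimal, hypothetical example (in TypeScript; every name is made up for illustration, not part of any existing framework) of how such conditional triggers might be declared as data rather than written as application code:

```typescript
// Hypothetical shape of a designer-authored interaction rule: run an effect
// on an event unless one of the attached conditions vetoes it.
interface UiState {
  isRunning(effect: string): boolean;
  hasData(binding: string): boolean;
}

interface InteractionRule {
  on: string;                                  // event name, e.g. "SearchBox.Submit"
  run: string;                                 // effect/transition name, e.g. "ResultsPanel.FadeIn"
  unless?: Array<(ui: UiState) => boolean>;    // conditions that can block the effect
}

const rules: InteractionRule[] = [
  {
    on: "SearchBox.Submit",
    run: "ResultsPanel.FadeIn",
    unless: [
      ui => ui.isRunning("ResultsPanel.FadeOut"), // don't fight the opposite animation
      ui => !ui.hasData("SearchResults"),         // don't animate an empty panel
    ],
  },
];

// The runtime evaluates the vetoes before playing the effect.
function dispatch(event: string, ui: UiState): void {
  for (const rule of rules) {
    if (rule.on !== event) continue;
    const blocked = (rule.unless ?? []).some(cond => cond(ui));
    if (!blocked) console.log(`play ${rule.run}`);
  }
}
```

The point is that the designer declares when an effect may run, and the runtime, not application code, checks the conditions.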

Defining all of this from code, whatever code that may be, is not ideal. I want opinions on how such a tool could be "ideal", or at least better, from a designer's visual perspective.

In other words, what would the UI/UX design environment of your dreams look like?

(note: these problems apply to website or mobile design as well, so if you have opinions in those fields, they're welcome too).

Update:

I'll lay out some thoughts I have about how it could be done.

First and foremost, one of the objectives is to make a complete separation between the designer's work and the programmer's work.

It is my belief that the UI and UX should be considered before writing any application logic - sometimes, seeing a GUI makes it clearer which features are missing and which are unnecessary clutter.
So the designer should be able to create complex interactions in the UI, even if this UI executes no code. This also makes it easier to debug the UI.

So, an application would expose an API to the UI, composed of:

The designer will also have tools that provide placeholder sources of data in a customised format, so he/she can databind to "fake properties" in a debug configuration, much like Lorem Ipsum text.

Properties exposed to the designer can't be arbitrary objects - they'll be limited to a set of simple, generic datatypes such as "Text" (string), "Integer", "Decimal", or even "Image" (by simple I mean simple for the designer; more on handling the performance implications later), arrays of some simple type, or "composite data", similar to the notion of tuples, which groups several datatypes or arrays.
This means the designer can simply ask for the data he/she needs, or easily generate fake data for debugging.
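
As a rough sketch of what that restricted type system and a Lorem-Ipsum-style data source could look like (TypeScript; all names are assumptions of mine, not a settled design):

```typescript
// The handful of designer-facing datatypes, plus arrays and tuple-like composites.
type SimpleValue =
  | { kind: "Text"; value: string }
  | { kind: "Integer"; value: number }
  | { kind: "Decimal"; value: number }
  | { kind: "Image"; value: string };   // e.g. a URI or handle, kept opaque to the designer

type DesignerValue =
  | SimpleValue
  | { kind: "Array"; items: SimpleValue[] }
  | { kind: "Composite"; fields: Record<string, DesignerValue> }; // tuple-like grouping

// A Lorem-Ipsum-style provider the designer can bind to in a debug configuration.
function fakeProperty(kind: "Text" | "Integer" | "Decimal"): SimpleValue {
  if (kind === "Text")    return { kind, value: "Lorem ipsum dolor sit amet" };
  if (kind === "Integer") return { kind, value: Math.floor(Math.random() * 100) };
  return { kind: "Decimal", value: Math.random() * 100 };
}

// Designer-side usage: bind a label to a fake Text property while the real API is absent.
const debugTitle = fakeProperty("Text");
console.log(debugTitle.value);
```

Keeping the designer-facing types this small is what makes it trivial to swap fake values in for real bindings during debugging.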

Now, what about performance?

Well, we all know generating the same image over and over would be expensive, but the designer is not meant to think too much about this. Instead, repeated requests for the same element would be cached by an intermediary layer in the API, sitting between the application code (written by the programmer) and the UI.

I'm still thinking about the implementation details, but the idea is that a programmer could tell the API that calling a given function with the same parameters will always produce the same result, and the API would then cache that function's results intelligently.
However it ends up being implemented, it will be transparent to the designer, as that's the objective.
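
One way it could work - a minimal sketch under the assumption that the programmer explicitly marks a function as deterministic, with hypothetical names throughout - is plain memoization inside the API layer:

```typescript
// Wrap a "pure" function exposed to the UI so repeated requests with the
// same parameters are served from a cache instead of being recomputed.
function cached<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A) => {
    const key = JSON.stringify(args);            // naive key; a real API would be smarter
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key)!;
  };
}

// Programmer side: declare that rendering a thumbnail is deterministic.
const renderThumbnail = cached((imageId: string, size: number) => {
  console.log(`expensive render of ${imageId} at ${size}px`); // runs once per distinct input
  return `thumb:${imageId}:${size}`;
});

// Designer/UI side: repeated bindings to the same data cost nothing extra.
renderThumbnail("logo", 64);
renderThumbnail("logo", 64); // cache hit, no second render
```

The designer never sees the cache; they just bind to the function as if it were free to call.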

Some concepts I'm thinking of for UI composition:

(quite self-explanatory terms)

Surfaces: Basic components of the user interface that have properties and a shape, and are bound to a material. Internally represented in vector format.
Material: A shared element in the UI that defines in one place the style of several elements (such as gradient or texture fills, borders, drop shadows, etc.).
Components: Pieces of UI with logic of their own, which can do more advanced things than surfaces, such as generating particle effects, linking to other components in a mutually interactive way, and even generating sub-components of their own. This will make certain effects, like an animated "loading" circle of circles (like this one), simpler to make. They will contain a scripting language, maybe a JavaScript engine.
Effects/Transitions: These are intended to be triggered by other parts of the UI or by changes in state. Things such as ButtonFoo.Glow or PanelBar.MoveToLeft, named and designed by the designer and grouped in a namespace-like system.
States: These represent the current state of the UI, and can be used by triggers.
Triggers: These fire certain effects or transitions.
Conditions: These are checks that can be attached to triggers.
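
To make the relationships between those concepts a bit more concrete, here is a very rough sketch of how they might map onto types (TypeScript; every name and field is an assumption, not a finished design):

```typescript
// Shared visual definition reused by many surfaces (fills, borders, shadows...).
interface Material {
  fill?: string;          // e.g. a gradient or texture reference
  border?: string;
  dropShadow?: boolean;
}

// Basic vector-shaped building block bound to a material.
interface Surface {
  shape: string;          // vector path data
  material: Material;
  properties: Record<string, unknown>;
}

// A piece of UI with logic of its own, possibly scripted, able to spawn children.
interface Component {
  surfaces: Surface[];
  children: Component[];
  script?: string;        // source for an embedded scripting engine (e.g. JavaScript)
}

// Named, designer-authored effect such as "ButtonFoo.Glow" or "PanelBar.MoveToLeft".
interface Effect {
  name: string;           // namespaced: "ButtonFoo.Glow"
  play(target: Surface | Component): void;
}

// Triggers fire effects when a state change passes all attached conditions.
interface Trigger {
  when: string;                                    // state or event name
  conditions: Array<(states: Set<string>) => boolean>;
  effects: Effect[];
}
```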

The designer will also be able to expose data to the programmer, through a system that looks similar to the consumed properties on the designer's side.

Have you looked at something like XUL or XAML? Or am I misunderstanding the question?
–
Rahul♦Nov 30 '10 at 23:50

I have looked a bit at XUL and XAML, but I'm talking about a technology that separates design and code in a more advanced way. I'm about to post an update with some extra thoughts.
–
Camilo MartinDec 1 '10 at 0:10

By the way, XUL is cool in theory, but the lack of support from any tool besides Firefox makes it kind of risky to use, I guess. And XAML is like XUL but with .NET makeup and some proprietary tools.
–
Camilo MartinDec 1 '10 at 0:38


Hi Camilo, now that you've updated your question I'm finding it hard to find an actual question in there. You have a lot of ideas, but this is reading more and more like a blog post outlining your thoughts on how your ideal design IDE/framework would work and less like a question that can be reasonably answered. You should consider breaking the question up into smaller chunks that are answerable.
–
Rahul♦Dec 10 '10 at 9:26

@Rahul, thanks for the suggestion - I've added a summary. If it's still not enough please tell me, as I also fear most people may go tl;dr on this :scratches head:
–
Camilo MartinDec 10 '10 at 19:03

1 Answer

I think the user can only focus on one thing at a time: either a single view in the view hierarchy (1), or, when many interactive views are on screen at once, those views should ideally be independent (2). In case (1) the application can control the state of the current view, and in case (2) the application does not need to synchronize the state of the independent views.

Thanks for the ideas! :) Maybe a pluggable component system would be good both for extensibility and for the mental model it provides - something similar to the "controls"/"components" of most UI toolkits, but more extensible and more focused on visual editing than on code.
–
Camilo MartinDec 10 '10 at 19:13