An Artificial Intelligence GUI: AI operating at the most fundamental drawing and input levels.
A GUI that could potentially predict the user's actions at the resolution of mouse movements.
That would be like... a tactile AGI.

Fluxus - Livecoding Environment

Using AI to do tactile or physical reasoning.
Which is OK... but textual reasoning is an entirely different realm.
Sure, there are multiple models that we could insert in between, like a physics engine.
I mean a pre-made physics engine, like Bullet, which has the inherent feature of preventing two solid objects from occupying the same points in space.
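A minimal sketch of the non-overlap property that the physics-engine idea relies on, in plain Python rather than Bullet: two axis-aligned boxes that intersect are pushed apart along the axis of least penetration, the way a rigid-body solver resolves contact. The box representation here is an assumption for illustration.

```python
def separate(a, b):
    """Push two overlapping axis-aligned boxes apart along the axis
    of least penetration, as a rigid-body engine would.
    Boxes are dicts with x, y (centers) and w, h (half-extents)."""
    dx = b["x"] - a["x"]
    dy = b["y"] - a["y"]
    overlap_x = a["w"] + b["w"] - abs(dx)
    overlap_y = a["h"] + b["h"] - abs(dy)
    if overlap_x <= 0 or overlap_y <= 0:
        return  # boxes do not intersect; nothing to resolve
    if overlap_x < overlap_y:                 # resolve along x
        shift = overlap_x / 2 * (1 if dx >= 0 else -1)
        a["x"] -= shift
        b["x"] += shift
    else:                                     # resolve along y
        shift = overlap_y / 2 * (1 if dy >= 0 else -1)
        a["y"] -= shift
        b["y"] += shift

a = {"x": 0.0, "y": 0.0, "w": 1.0, "h": 1.0}
b = {"x": 1.0, "y": 0.2, "w": 1.0, "h": 1.0}
separate(a, b)   # afterwards the boxes just touch, no longer overlap
```

Run repeatedly over all pairs, this is also a crude graph-layout step: GUI elements settle into non-overlapping positions for free.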

Which is what graph layout and GUI design are really all about.
Now what we need is a time equivalent for deciding what objects are presented to the user: not everything at once, but a subset of the KB, and how that subset's boundaries change (expand/contract buttons, etc.).
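One established way to pick such a subset is a fisheye degree-of-interest function (Furnas): show an item only if its intrinsic importance, minus its distance from the user's current focus, clears a threshold. A sketch over a KB organized as a tree; the tree, importance scores, and threshold are all hypothetical.

```python
# KB tree encoded as node -> parent (root maps to None).
def path_to_root(tree, node):
    path = [node]
    while tree[node] is not None:
        node = tree[node]
        path.append(node)
    return path

def tree_distance(tree, a, b):
    """Number of edges between a and b via their nearest common ancestor."""
    pa, pb = path_to_root(tree, a), path_to_root(tree, b)
    common = set(pa) & set(pb)
    return min(pa.index(c) + pb.index(c) for c in common)

def visible_subset(tree, importance, focus, threshold=-1):
    """Degree of interest = importance - distance from focus;
    only items at or above the threshold are presented."""
    return {n for n in tree
            if importance[n] - tree_distance(tree, focus, n) >= threshold}

tree = {"root": None, "a": "root", "b": "root",
        "a1": "a", "a2": "a", "b1": "b"}
importance = {"root": 2, "a": 1, "b": 1, "a1": 0, "a2": 0, "b1": 0}
vis = visible_subset(tree, importance, focus="a1")
# focus and its ancestors stay visible; distant siblings are culled
```

Moving the focus or the threshold over time is exactly the "expand/contract" behavior: the subset's boundary grows and shrinks as the user navigates.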

The Connection Between a Physical World and GUI

So, the physical objects are not really objects... what are they supposed to be?
Representations of data in the system: a button represents a possible action that can be invoked, a text represents a string, etc.
Such as... a file? A document?
Some data objects will have multiple representations. A meta-representation then allows the user to select among them, or to adjust properties of the visualization: color, size, etc.
And they "feel" like physical objects... is that the idea?
The larger idea is that we can instantiate virtual agents in the space, making it self-aware as a cybernetic system. It traces optic rays; mouse pointers are just a kind of light, and the retinas of virtual agents floating around can see the space from inside it.
A virtual agent is an embodiment of a program. It's got to have algorithms, with inputs and outputs that change in realtime... otherwise it would seem dead or asleep.
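A hypothetical sketch of the representation idea: a data object carries several renderers, and a meta-representation is the mechanism that selects among them or adjusts their visual properties. All class and method names here are illustrative, not from any existing system.

```python
class Representation:
    """One way of viewing a datum, with adjustable visual properties."""
    def __init__(self, name, render, color="white", size=12):
        self.name, self.render = name, render
        self.color, self.size = color, size

class DataObject:
    """A system datum (file, string, action...) with several views."""
    def __init__(self, value, representations):
        self.value = value
        self.representations = {r.name: r for r in representations}
        self.active = next(iter(self.representations))  # default view

    def select(self, name):
        # The meta-representation's job: choose among available views.
        self.active = name

    def render(self):
        return self.representations[self.active].render(self.value)

doc = DataObject("hello.txt",
                 [Representation("icon", lambda v: f"[{v}]"),
                  Representation("label", lambda v: v.upper())])
doc.select("label")   # the user switches views via the meta-representation
```

The same `DataObject` could just as well wrap an invokable action, making a button simply another representation choice.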

How can you represent algorithms in physical space?

Text, graph visualization, data flow, class hierarchy: anything and everything imaginable.
Or, maybe one algorithm is a box or cube-like thing?
Sure, an algorithm could be a cube which, when clicked, explodes into a program tree. Or the reason why something happens can have its proof tree attached semi-transparently.
Oh I see... a very ambitious idea... but inevitable.
Yes, it's the boundary between the artificial intelligence and 'artificial life' fields.
Programming would consist of text entry and drawing lines between things: wires, or rope, which can also be physically modeled and have other objects attached to them (reification).
Text is just another input modality; you would be free to work text-only. However, you could still use some of the navigation possibilities while entering text, like advanced consoles do.
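The "drawing lines between things" style of programming is essentially a dataflow graph. A minimal sketch (names hypothetical): each node is a live algorithm with inputs and outputs, wires connect node outputs to node inputs, and evaluation is pulled along the wires.

```python
class Node:
    """A box in the space: an algorithm with live inputs and outputs."""
    def __init__(self, fn):
        self.fn = fn
        self.inputs = []          # wires: upstream nodes feeding this one

    def wire(self, upstream):     # "drawing a line" between two objects
        self.inputs.append(upstream)
        return self

    def value(self):              # pull evaluation along the wires
        return self.fn(*(n.value() for n in self.inputs))

def const(c):
    return Node(lambda: c)        # a source node with no inputs

add = Node(lambda a, b: a + b).wire(const(2)).wire(const(3))
scale = Node(lambda x: x * 10).wire(add)
# scale.value() recomputes the whole graph: (2 + 3) * 10
```

Because wires are ordinary objects here, they could themselves be rendered, physically modeled, and have other objects attached, which is the reification point above.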

A New Form of Communication

I think it's possible that a new form of communication can be invented, based on fractal language: one that isn't content with just character-based text, but where size, position, etc. matter, relative to other words. For example, I could implement a zooming file-tree navigator.
Perhaps that is what KB engineering is about: what you deal with is not programs, but KB items.
Well, ultimately all of these constructs can be encoded logically, and perhaps learned from example or demonstration: "This rectangle is called a window. The X in its upper-right corner, when clicked, closes it."
It will be interesting when the AI starts to generate new programs, surprising us, and finds new ways of unifying different UI models. Unifying, or abstracting them.
This will also allow the GUI to self-transform according to human preferences.
Inventing KB items is equivalent to generating new programs. Once the system can be programmed from within itself, it will be fully reflective. That's the point where NetBeans isn't necessary; in fact, most programs we use will be generalized into this system's functionality: all forms of content creation and editing, communication, programming, information navigation, etc. Like the Construct in The Matrix.
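The zooming file-tree example can be sketched as follows: each entry's rendered size decays geometrically with its depth-distance from the current zoom focus, so nearby items are large and distant ones shrink until they are culled. All names and parameters are hypothetical.

```python
def layout(paths, focus_depth, base_size=32, decay=0.5, min_size=4):
    """Assign a font size to each path: size falls off geometrically
    with distance in depth from the zoom focus; entries that would
    render below min_size are culled, giving the 'fractal' boundary."""
    sizes = {}
    for path in paths:
        depth = path.count("/")
        size = base_size * decay ** abs(depth - focus_depth)
        if size >= min_size:
            sizes[path] = size
    return sizes

paths = ["src", "src/gui", "src/gui/canvas.py", "src/kb", "docs"]
sizes = layout(paths, focus_depth=1)
# entries at the focus depth render at full size; others shrink
```

Changing `focus_depth` over time is the zoom gesture: the visible boundary of the tree expands and contracts around the focus.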

This could be fruitfully applied to KB engineering.
We definitely need a cool GUI for manipulating the KB in Genifer.

Where Did This Come From?

Using a logic engine to learn OpenGL drawing commands and interpret GUI input: a re-interpretation of the MVC pattern.
Correct; MVC could apply to anything, console or even voice-only.
OpenGL is just the canvas to draw on. OpenGL provides a complete set of drawing and FX primitives. It could also be HTML.
Also, I've included WebGL, which is soon to become standard; it's in the beta versions of Firefox, Chrome, and probably some others.
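A minimal sketch of that MVC reading: the model and controller know nothing about pixels, so the same pair can drive an OpenGL-style canvas, a console, or in principle a voice interface; only the view changes. All class names are illustrative.

```python
class CounterModel:
    """Pure state; knows nothing about how it is displayed."""
    def __init__(self):
        self.count = 0

class Controller:
    """Maps abstract input events to model updates; no drawing here."""
    def __init__(self, model):
        self.model = model

    def handle(self, event):
        if event == "click":
            self.model.count += 1

class ConsoleView:
    def render(self, model):
        return f"count = {model.count}"

class GLView:
    """Stand-in for an OpenGL canvas: emits drawing commands instead
    of text, but reads the very same model."""
    def render(self, model):
        return [("draw_text", 10, 10, str(model.count))]

model = CounterModel()
ctrl = Controller(model)
ctrl.handle("click")
ctrl.handle("click")
# both views now render the same state through different canvases
```

Swapping `GLView` for an HTML or WebGL renderer touches nothing but the view class, which is the point of the re-interpretation above.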
