But actually storing the drawing sequence of the character strokes would make text recognition much more effective.

You'll find lots of prior work in this area for Chinese (and similar) text. Handwriting recognition is often the fastest way to look up an unfamiliar character, especially if the user isn't familiar with their dictionary's indexing method.

I'm still not sure that this is the most efficient method though, particularly when you have to wait half a second between letters while the handwriting system decides whether you're finished writing. If you wrote the command name out in full you would only have to wait once, and a regular user could probably write it in under half a second.

Why would "open file" (and waiting once) take less time to write than "O" (and waiting once)?

Because you don't just write "O" and wait once; you write "F", wait for the menu to open, then write "O"; "alt-o" in LibreOffice Writer opens the "Format" menu.

For my OS and applications designed for my OS, assume you write "O" (and wait once).

Existing applications for existing OSs (e.g. LibreOffice) are completely irrelevant. Every single thing applications rely on is radically different for my OS.

onlyonemac wrote:

Also, what if I want to enter underlined text (in a natural way, *without* having to use the formatting menus designed for a keyboard-and-mouse interface)?

For what I've described, you couldn't enter a single underlined character (but could enter an underlined word). Of course what I've described is only one of the many possible ways that a handwriting recognition system could be implemented (and is just something that I "invented" without much research into the most effective way of designing a handwriting recognition system); and changing the way the handwriting recognition system works would make absolutely no difference to the way front-ends work.

For a simple example: maybe it would be better to have an "enter as text" button and an "enter as command" button, with no delay while the system waits to see whether the user has finished writing. It makes no difference to any application's front-end how it's done.

onlyonemac wrote:

Brendan wrote:

(although I don't see why a toolbar couldn't have "sub-toolbars" either)

We're talking about a toolbox, not a toolbar - they're different things and are used in different situations. Furthermore, enforcing the use of submenus/subtoolbars/sub-anything-elses for frequently-used commands which users will want to access quickly is a bad design principle that will reduce efficiency for all users. Perhaps you should do some research on current interface design (and existing studies on user behaviour patterns and experiences) before you start trying to design your own interface.

Sigh. I described "pop-up bubbles to show the command for each picture in the toolbar/toolbox" (so that the user can discover the command to use as a keyboard shortcut or by writing an underlined letter for the handwriting system, and wouldn't need to use the menu/toolbar/toolbox for frequently used things after they've discovered the command). From this you decide that I'd be forcing users to always use the menu/toolbar/toolbox for frequently used things, and preventing them from using the commands (as keyboard shortcuts or underlined characters)?

onlyonemac wrote:

Brendan wrote:

How exactly do you expect handwriting would work for something like drawing on a canvas? Make the poor user write "select the 123rd pixel from the left and change it to green" by hand, for every single pixel they want changed, simply because you don't want to admit that "touchpad mode" is far more efficient?

Don't use a handwriting device for drawing, end of story. Use a pointing device (mouse, graphics tablet, etc.) that's designed for graphical work, and don't try to make a completely separate class of input device try to perform the same functions.

Um, what? We've taken what is essentially a graphics tablet and added handwriting recognition software to that hardware to create a handwriting recognition system; and now you're trying to tell me that the graphics tablet shouldn't be used as a graphics tablet, because a graphics tablet is a completely separate class of input device to a graphics tablet?

onlyonemac wrote:

Brendan wrote:

You still haven't described or explained anything; you just continually repeat the same opinion without ever backing it up with anything other than a never ending stream of nonsense and distractions.

Perhaps if you actually read what I said and considered it then you wouldn't consider it a "stream of nonsense and distractions". If this thread is distracting you from writing your operating system then how about you either listen to what I say, or admit that you will never listen and just stop posting?

Nobody (including me) can prove that no front end will ever need to care what type of input device is being used - to prove this it would require an exhaustive search of all possible applications (including applications that don't exist yet, and input devices that don't exist yet). This means that there is an (extremely tiny) risk that I might spend years implementing this system and find that there are a small number of problem cases after it's too late to avoid years of work. Essentially; if there is a case where a front end needs to care what the input device is then I have a strong incentive to find it sooner rather than later.

You think front ends do need to care what the input device is; and I have a strong incentive to find out why you think this; but you have been unable to explain why you think front ends do need to care what the input device is. For me it is a frustrating exercise, like using a stick of butter to drill a hole in concrete, because nothing seems to get any closer to finding out why you think front ends do need to care what the input device is. It's extremely likely that you are wrong; but you continually reassert the "front ends need to care what the input devices are" claim as if you have a reason for your opinion.

onlyonemac wrote:

Someone mentioned a few days ago an important difference between a handwriting recognition system and a mouse/touchpad, which happened to be a difference that I had not thought of but which is true nevertheless. Maybe you would like to go back and find what they said.

- ~ posted a reasonable overview of handwriting recognition technology.
- Rusky posted a suggestion that "a DAW" (I assumed a digital audio workstation) is a crazier example than the examples (graphics editor, audio/video editor) that you provided (but didn't say anything to indicate a digital audio workstation would need to care what the input device/s are).
- tjmonk posted that they see no problem with a system like what I've described (except for the time involved in doing it well).
- Combuster got confused and thought we were talking about differences between different input devices from the user's perspective (of which there are many) and not talking about differences between different input devices from the front-end's perspective (of which there may be none).
- DavidCooper posted something that seems to disagree with your ideas, but (I suspect) was at least partially intended to stimulate some discussion about David's "sound screen" idea. Note: while I do find the "sound screen" idea interesting, my system is designed for 3D and I'm not sure how the idea could be extended to work for 3D content, so I've not commented on that idea.
- DavidCooper posted comments about ergonomics (and that the traditional "desktop" is unhealthy), and that if people are able to work efficiently without a screen they'd be happy to do so.
- tjmonk15 suggested that you are being "intentionally obtuse/dense, or aren't smart enough to have any meaningful input in this field. (Or I guess, don't understand english well enough)" in response to comments you made about navigating menus using voice commands.

I don't know which of these you think "mentioned an important difference between a handwriting recognition system and a mouse/touchpad". I suspect that you are hallucinating; and that if an important difference actually was mentioned (that neither of us thought of before) you would have remembered the reason and stated it instead of alluding to a post that seems to have never existed.

Cheers,

Brendan

_________________For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Maybe one of the most proven real-life programs is the one Stephen Hawking uses, called ACAT, according to news from around August 2015 (there are probably more links to be found via Google):

For my OS and applications designed for my OS, assume you write "O" (and wait once).

Justify that. Explain how that's going to work, without ending up with conflicts once we've got more than 26 commands (remember that every command needs a shortcut letter, for your blind users who aren't being given a proper interface). (Also note the difference between keyboard shortcuts e.g. "ctrl-s" and the underlined letters in menus e.g. "alt-f, s".)

Brendan wrote:

Of course what I've described is only one of the many possible ways that a handwriting recognition system could be implemented (and is just something that I "invented" without much research into the most effective way of designing a handwriting recognition system); and changing the way the handwriting recognition system works would make absolutely no difference to the way front-ends work.

Yet when I "invented" an equally plausible way for the system to be implemented you discarded it without any consideration?

Brendan wrote:

From this you decide that I'd be forcing users to always use the menu/toolbar/toolbox for frequently used things, and preventing them from using the commands (as keyboard shortcuts or underlined characters)?

No, the toolbars and toolboxes are for mouse (and other pointing device) users; the keyboard shortcuts and underlined characters are for keyboard users (and, in your fantasy world, handwriting recognition users as well).

Brendan wrote:

Um, what? We've taken what is essentially a graphics tablet and added handwriting recognition software to that hardware to create a handwriting recognition system; and now you're trying to tell me that the graphics tablet shouldn't be used as a graphics tablet, because a graphics tablet is a completely separate class of input device to a graphics tablet?

It doesn't matter what the underlying hardware is, we're talking about handwriting recognition systems here. If the actual hardware is a graphics tablet and it can also work as a graphics tablet and we're using it as a graphics tablet then that's a separate input device to when it's being used with handwriting recognition software and being treated as a handwriting recognition input device.

Brendan wrote:

Combuster got confused and thought we were talking about differences between different input devices from the user's perspective (of which there are many) and not talking about differences between different input devices from the front-end's perspective (of which there may be none).

He didn't get confused; you did. He's talking about the dexterity of control over the devices in question and how those influence the way that they are used.

If you really insist on taking this approach, why not try this:

Divide input devices into categories. I see three main categories: pointing devices (mice, touchscreens, trackpads, trackballs, analogue joysticks, etc. - anything that can input an absolute or relative location or movement along two or more axes and a select action), text input devices (keyboards, voice recognition, handwriting recognition, OCR, etc. - anything that can input text characters), and directional navigation devices (4-way keypads, digital joysticks, switch-access devices, etc. - anything that's got controls to navigate in two or more directions and a control to select things). There may be additional categories needed (for example, a scanner and a camera could fall into a separate category, but if they're used for OCR or recognising sign language then they're text input, and if they're used for navigation through hand gestures then they're either pointing devices or directional navigation devices depending on the system used) or there may be another way entirely to classify devices, but the point is that we're grouping devices into groups with similar characteristics. Note also that devices may fall into one or more categories depending on the mode used, for example a keyboard can be used as either a text input device or a directional navigation device, and as you suggested the handwriting recognition system's hardware can also be used as a pointing device (although then it no longer falls into the category of a handwriting recognition system and isn't a text input device; it's a separate device entirely).
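The three-category split described above can be sketched as a small classifier. The category names and the device-to-category table below are just an illustration of this post's proposal, not any real OS API:

```python
from enum import Enum, auto

class Category(Enum):
    POINTING = auto()      # absolute/relative position on 2+ axes, plus a select action
    TEXT = auto()          # produces text characters
    DIRECTIONAL = auto()   # discrete up/down/left/right controls, plus select

# Illustrative mapping; a device may appear under several categories
# depending on the mode it is used in (as described above).
DEVICE_CATEGORIES = {
    "mouse":                   {Category.POINTING},
    "touchscreen":             {Category.POINTING},
    "trackpad":                {Category.POINTING},
    "keyboard":                {Category.TEXT, Category.DIRECTIONAL},
    "voice recognition":       {Category.TEXT},
    "handwriting recognition": {Category.TEXT},
    "digital joystick":        {Category.DIRECTIONAL},
    "4-way keypad":            {Category.DIRECTIONAL},
}

def categories_of(device: str) -> set:
    """Return the set of categories a device can act as (empty if unknown)."""
    return DEVICE_CATEGORIES.get(device, set())
```

A device that can operate in several modes (a keyboard used for text entry or for menu navigation, a tablet used for handwriting or as a touchpad) simply appears under more than one category.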

Then you can design your interface system so that it can work with any number of devices from any number of categories. So for example the word processor takes text from a text input device and can also take keyboard shortcuts with a modifier or commands written on a handwriting recognition device with a "modifier" gesture. But it also has menus and toolbars for use with a pointing device and an on-screen keyboard (although few users would actually use this seriously), and the menus can also be navigated using a directional navigation device and again there can be an on-screen keyboard for entering text.

The graphics editor allows the user to draw with a pointing device and provides toolboxes for frequently-used tools and a menubar for all the available options, but users can also draw with a directional navigation device (there are many ways this could be done - you could have a path-based system whereby the user can enter and modify nodes in a drawing path before confirming the path to have it applied, or you could have a plotting-style system for straight lines, or whatever else that allows mostly freeform shapes to be drawn with a directional device) and again they could navigate the menus with the directional device; using a text input device for a graphics editor could prove a little tricky and I'd like to see you suggest some way of doing this that isn't just using a handwriting recognition system as a pointing device.

As we can see, once we group input devices into categories with similar characteristics it is easy to create interfaces that are optimised for the characteristics of each group of input devices and allow users to get the full benefit from their particular input device (whether they're using that input device out of choice or necessity is irrelevant); it just requires the application/frontend to be aware of what input device is used and present an appropriate interface.

_________________When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

For my OS and applications designed for my OS, assume you write "O" (and wait once).

Justify that. Explain how that's going to work, without ending up with conflicts once we've got more than 26 commands (remember that every command needs a shortcut letter, for your blind users who aren't being given a proper interface). (Also note the difference between keyboard shortcuts e.g. "ctrl-s" and the underlined letters in menus e.g. "alt-f, s".)

If there's more than 26 frequently used commands the application is probably a hideous mess that needs redesigning; and for rarely used things nobody is going to remember them and the user will end up using other things (main menu, context sensitive menu, toolbar, whatever) regardless.

onlyonemac wrote:

Brendan wrote:

Of course what I've described is only one of the many possible ways that a handwriting recognition system could be implemented (and is just something that I "invented" without much research into the most effective way of designing a handwriting recognition system); and changing the way the handwriting recognition system works would make absolutely no difference to the way front-ends work.

Yet when I "invented" an equally plausible way for the system to be implemented you discarded it without any consideration?

There's many different ways that a handwriting recognition system could be designed which work with a "front end doesn't care what the input device is" system; and also many different ways that a handwriting recognition system could be designed that don't work in conjunction with the "front end doesn't care what the input device is" system. The existence of the latter doesn't prove the former is impossible or undesirable.

onlyonemac wrote:

Brendan wrote:

From this you decide that I'd be forcing users to always use the menu/toolbar/toolbox for frequently used things, and preventing them from using the commands (as keyboard shortcuts or underlined characters)?

No, the toolbars and toolboxes are for mouse (and other pointing device) users; the keyboard shortcuts and underlined characters are for keyboard users (and, in your fantasy world, handwriting recognition users as well).

No. The menus, toolbars, toolboxes and whatever are for all input devices and for discoverability (to provide a way for user to know what the application's commands are).

onlyonemac wrote:

Brendan wrote:

Um, what? We've taken what is essentially a graphics tablet and added handwriting recognition software to that hardware to create a handwriting recognition system; and now you're trying to tell me that the graphics tablet shouldn't be used as a graphics tablet, because a graphics tablet is a completely separate class of input device to a graphics tablet?

It doesn't matter what the underlying hardware is, we're talking about handwriting recognition systems here. If the actual hardware is a graphics tablet and it can also work as a graphics tablet and we're using it as a graphics tablet then that's a separate input device to when it's being used with handwriting recognition software and being treated as a handwriting recognition input device.

I am amazed at your ability to stretch reality into a twisted pile of nonsense. My keyboard is 2 completely separate input devices, one that does capital letters (unless I hold down the shift key) and another that does lower case letters (unless I hold down the shift key). It's not one device with 2 modes at all.

onlyonemac wrote:

Brendan wrote:

Combuster got confused and thought we were talking about differences between different input devices from the user's perspective (of which there are many) and not talking about differences between different input devices from the front-end's perspective (of which there may be none).

He didn't get confused; you did. He's talking about the dexterity of control over the devices in question and how those influence the way that they are used.

It might influence which devices the user chooses to purchase/use; but makes no difference to any application's front-end.

onlyonemac wrote:

If you really insist on taking this approach, why not try this:

Divide input devices into categories. I see three main categories: pointing devices (mice, touchscreens, trackpads, trackballs, analogue joysticks, etc. - anything that can input an absolute or relative location or movement along two or more axes and a select action), text input devices (keyboards, voice recognition, handwriting recognition, OCR, etc. - anything that can input text characters), and directional navigation devices (4-way keypads, digital joysticks, switch-access devices, etc. - anything that's got controls to navigate in two or more directions and a control to select things). There may be additional categories needed (for example, a scanner and a camera could fall into a separate category, but if they're used for OCR or recognising sign language then they're text input, and if they're used for navigation through hand gestures then they're either pointing devices or directional navigation devices depending on the system used) or there may be another way entirely to classify devices, but the point is that we're grouping devices into groups with similar characteristics. Note also that devices may fall into one or more categories depending on the mode used, for example a keyboard can be used as either a text input device or a directional navigation device, and as you suggested the handwriting recognition system's hardware can also be used as a pointing device (although then it no longer falls into the category of a handwriting recognition system and isn't a text input device; it's a separate device entirely).

OK; but note that:

A directional navigation device can be used to emulate a pointing device (just with less control over the speed of movement)

Any pointing device (and therefore any directional navigation device), in conjunction with an "on screen virtual keyboard", can emulate a text input device (and that this is extremely common - e.g. smartphones, tablets)

Any text input device can have a method or mode where it emulates a pointing device or directional navigation device (even if this means using words like "up" to represent movement)

An input device emulating a different category may be worse than a device intended for that category; but an input device emulating a different category is better than nothing when no other device exists.

The combinations above imply that all categories of input devices are able to emulate all other categories of input devices; which means that all input devices are able to generate all events to send to a front-end.

Each category of input devices has a small set of events intended for its native, "no emulation" use. These events can be combined into a super-set of events for all input devices.

The super-set events would have categories too - e.g. "pointing device events", "commands", and "literal characters/text"; but (partly because all input devices are able to generate all events anyway) there's no reason for a front-end to care what the input device/s actually are, or (when there are multiple different input devices being used simultaneously) which input device sent an event.
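As a sketch of the super-set idea: every driver (or emulation layer) would translate its raw input into one of a few event categories, so a front-end dispatches on the event type alone and never learns which physical device produced it. The event and function names here are invented for illustration:

```python
from dataclasses import dataclass
from typing import Union

# The super-set: three event categories, regardless of which physical
# device (or emulation layer) generated them.
@dataclass
class PointerEvent:
    x: int
    y: int
    select: bool = False

@dataclass
class CommandEvent:
    command: str          # e.g. "open file"

@dataclass
class TextEvent:
    text: str             # literal characters

InputEvent = Union[PointerEvent, CommandEvent, TextEvent]

def handle(event: InputEvent) -> str:
    """A front-end dispatches on the event category only; it has no way
    of knowing (and no reason to care) which device sent the event."""
    if isinstance(event, PointerEvent):
        return f"pointer at ({event.x}, {event.y})"
    if isinstance(event, CommandEvent):
        return f"run command: {event.command}"
    return f"insert text: {event.text}"
```

Under this scheme a keyboard shortcut, a handwritten underlined letter, and a menu selection would all arrive at the front-end as the same CommandEvent.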

onlyonemac wrote:

Then you can design your interface system so that it can work with any number of devices from any number of categories. So for example the word processor takes text from a text input device and can also take keyboard shortcuts with a modifier or commands written on a handwriting recognition device with a "modifier" gesture. But it also has menus and toolbars for use with a pointing device and an on-screen keyboard (although few users would actually use this seriously), and the menus can also be navigated using a directional navigation device and again there can be an on-screen keyboard for entering text.

You've just described a word-processor using a single front-end for all input devices; that has no need to care if (e.g.) text came from a keyboard or a pointing device (using an on-screen keyboard) or handwriting recognition system; and has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode".

onlyonemac wrote:

The graphics editor allows the user to draw with a pointing device and provides toolboxes for frequently-used tools and a menubar for all the available options, but users can also draw with a directional navigation device (there are many ways this could be done - you could have a path-based system whereby the user can enter and modify nodes in a drawing path before confirming the path to have it applied, or you could have a plotting-style system for straight lines, or whatever else that allows mostly freeform shapes to be drawn with a directional device) and again they could navigate the menus with the directional device; using a text input device for a graphics editor could prove a little tricky and I'd like to see you suggest some way of doing this that isn't just using a handwriting recognition system as a pointing device.

You've just described a graphics editor using a single front-end for all input devices; that has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode"; and has no reason to care if the pointer is being controlled by a mouse or touchpad or cursor keys or anything else.

For speech recognition you could use coordinates (e.g. the user says "coords 123, 456") to emulate a pointing device. If you deliberately cripple a handwriting system to be a lot less efficient by denying the use of "touch pad" capabilities that must exist by definition (to allow the user to enter handwriting); then it could also use the same coordinate system to emulate a pointing device (e.g. the user writes "coords 123, 456").
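The "coords X, Y" emulation is easy to sketch; the command syntax below is just the example used in this post:

```python
import re

def parse_coords(spoken_or_written: str):
    """Turn a dictated/handwritten 'coords X, Y' command into the same
    (x, y) pair a pointing device would report, or None if the input
    is not a coordinate command."""
    m = re.fullmatch(r"coords\s+(\d+)\s*,\s*(\d+)", spoken_or_written.strip())
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))
```

Anything that doesn't match the pattern would be passed on to the normal text/command recognition path instead.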

onlyonemac wrote:

As we can see, once we group input devices into categories with similar characteristics it is easy to create interfaces that are optimised for the characteristics of each group of input devices and allow users to get the full benefit from their particular input device (whether they're using that input device out of choice or necessity is irrelevant); it just requires the application/frontend to be aware of what input device is used and present an appropriate interface.

Except you described the same interface for all input devices; and failed to provide any reason why the application/front-end would care if (e.g.) text came from a keyboard or a virtual keyboard/mouse, or....

Cheers,

Brendan

_________________For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

If there's more than 26 frequently used commands the application is probably a hideous mess that needs redesigning; and for rarely used things nobody is going to remember them and the user will end up using other things (main menu, context sensitive menu, toolbar, whatever) regardless.

My firefox has more than 26 commands and it's not a hideous mess because I have, at some point in time, used all of them.

Brendan wrote:

There's many different ways that a handwriting recognition system could be designed which work with a "front end doesn't care what the input device is" system; and also many different ways that a handwriting recognition system could be designed that don't work in conjunction with the "front end doesn't care what the input device is" system. The existence of the latter doesn't prove the former is impossible or undesirable.

No, but the fact that the former doesn't allow for an interface optimised for a handwriting recognition system and the latter does allow for an interface optimised for a handwriting recognition system certainly proves that the latter is more desirable than the former.

Brendan wrote:

I am amazed at your ability to stretch reality into a twisted pile of nonsense. My keyboard is 2 completely separate input devices, one that does capital letters (unless I hold down the shift key) and another that does lower case letters (unless I hold down the shift key). It's not one device with 2 modes at all.

I don't know if you're being sarcastic or not, but there are two ways to interpret that statement and as I don't know which one is the case I'll give my response to both:

Response 1:

I never said that your keyboard is two completely separate input devices; both of those fall under one device, just sending either an uppercase or a lowercase character event depending on the shift key and caps-lock key states.

Response 2:

I never said that your keyboard is one device with two modes (at least in the situation that you're describing); both of those are the same mode - specifically, "text input" mode - and are just different ways that the user has to choose which character event they want to send.

Brendan wrote:

Any pointing device (and therefore any directional navigation device), in conjunction with an "on screen virtual keyboard", can emulate a text input device (and that this is extremely common - e.g. smartphones, tablets)

Any text input device can have a method or mode where it emulates a pointing device or directional navigation device (even if this means using words like "up" to represent movement)

An input device emulating a different category may be worse than a device intended for that category; but an input device emulating a different category is better than nothing when no other device exists.

The combinations above imply that all categories of input devices are able to emulate all other categories of input devices; which means that all input devices are able to generate all events to send to a front-end.

Each category of input devices has a small set of events intended for its native, "no emulation" use. These events can be combined into a super-set of events for all input devices.

The super-set events would have categories too - e.g. "pointing device events", "commands", and "literal characters/text"; but (partly because all input devices are able to generate all events anyway) there's no reason for a front-end to care what the input device/s actually are, or (when there are multiple different input devices being used simultaneously) which input device sent an event.

All quite true, but we should still try to optimise our interface for the real device in use rather than forcing users to use an emulation with an interface designed for a different device.

Brendan wrote:

You've just described a word-processor using a single front-end for all input devices; that has no need to care if (e.g.) text came from a keyboard or a pointing device (using an on-screen keyboard) or handwriting recognition system; and has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode".

It's not a single frontend. If a text input device is being used, it needs to provide keyboard shortcuts. If a pointing device is used, it needs to provide menus and toolbars designed for a pointing device. If a directional navigation device is being used, it needs to provide menus designed for a directional navigation device. Especially note the last two: the former needs frequently-used commands to be easy to reach with the pointing device i.e. toolbars, and all the commands laid out logically in hierarchical menus; the latter needs all commands in the same menus (as the idea of toolbars doesn't work well with a directional navigation device) but the menus can't be too long and complicated otherwise navigating them will be slow.
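onlyonemac's position here (a different presentation per device category) could be sketched as a dispatch on the set of categories present; the category strings and UI element names below are purely illustrative:

```python
def choose_presentation(categories: set) -> list:
    """Pick UI elements based on which device categories are present,
    as this post proposes (an illustrative sketch, not a real toolkit)."""
    ui = []
    if "pointing" in categories:
        # frequently-used commands within easy reach, full hierarchy in menus
        ui += ["toolbars", "hierarchical menus"]
    if "directional" in categories:
        # everything in menus, kept short so navigation stays fast
        ui += ["flat menus (short, navigable)", "on-screen keyboard"]
    if "text" in categories:
        ui += ["keyboard shortcuts", "underlined command letters"]
    return ui
```

The contested question in this thread is whether such a dispatch belongs in every front-end, or whether the emulation layers make it unnecessary.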

Brendan wrote:

You've just described a graphics editor using a single front-end for all input devices; that has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode"; and has no reason to care if the pointer is being controlled by a mouse or touchpad or cursor keys or anything else.

The graphics editor does need to know what input device is being used. If a directional navigation device is used, it needs to allow for the fact that the cursor can only be moved in straight lines at a fixed velocity (or a velocity that follows some predefined curve, but isn't controlled directly by the user because the hardware has only an on/off input) and provide some way for the user to still be able to draw curves and freeform shapes; whereas if a pointing device is being used then those features aren't needed and will get in the way of the user's use of the program. If a pointing device is being used then it needs to display a toolbox so that the user can quickly get to common drawing tools; but if a pointing device is absent then the toolbox isn't going to be that useful and will just waste screen space, and the application should provide a more appropriate way of getting to common tools, such as through the use of gestures - which would in turn interfere with the use of the application if a pointing device is present and being used.
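The path-based drawing mode suggested earlier for directional devices (step the cursor around, drop nodes, then confirm the whole path at once) might look like this minimal sketch; all names are invented:

```python
class PathTool:
    """Sketch of a node-based drawing mode for directional devices:
    the user steps a cursor in the four directions, drops nodes along
    the way, and finally confirms the whole path in one action."""
    STEP = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self, x=0, y=0):
        self.x, self.y = x, y
        self.nodes = []

    def move(self, direction: str):
        dx, dy = self.STEP[direction]
        self.x += dx
        self.y += dy

    def drop_node(self):
        self.nodes.append((self.x, self.y))

    def confirm(self):
        """Return the finished path, ready to be applied to the canvas."""
        return list(self.nodes)
```

A pointing-device user would never see this mode; a freehand drag produces the path directly.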

Brendan wrote:

handwriting system in "touchpad mode"

I've already told you that there is no such thing. It's called a "touchpad" and it's a completely separate device (as far as the OS is concerned) to a handwriting system even if it uses the same physical hardware. Getting that right in your head will go a long way to helping you to understand my posts.

_________________When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

If there are more than 26 frequently used commands, the application is probably a hideous mess that needs redesigning; and for rarely used things nobody is going to remember the shortcuts anyway, so the user will end up using other things (main menu, context-sensitive menu, toolbar, whatever) regardless.

My Firefox has more than 26 commands, and it's not a hideous mess; I have, at some point in time, used all of them.

Really? I don't believe you: save, open, settings/properties, exit, print, back, forward, refresh, home, one command to create a bookmark, and one to open bookmarks (plus maybe an "about" and a "help" that nobody cares about and probably don't need commands).

onlyonemac wrote:

Brendan wrote:

There's many different ways that a handwriting recognition system could be designed which work with a "front end doesn't care what the input device is" system; and also many different ways that a handwriting recognition system could be designed that don't work in conjunction with the "front end doesn't care what the input device is" system. The existence of the latter doesn't prove the former is impossible or undesirable.

No, but the fact that the former doesn't allow for an interface optimised for a handwriting recognition system and the latter does allow for an interface optimised for a handwriting recognition system certainly proves that the latter is more desirable than the former.

Given that you've repeatedly failed to show how any interface for any application/front-end could/would/should be optimised for any different input devices; who cares if the latter allows for an interface optimised for a handwriting recognition system in theory, when nobody has a reason to waste their time implementing one? The latter only complicates things for front-end developers (and things like widget services) and for users (especially when they're using 2 or more input devices) for no sane reason whatsoever; avoiding the need for pointless complications (the former) is far superior in every possible way.

onlyonemac wrote:

Brendan wrote:

I am amazed at your ability to stretch reality into a twisted pile of nonsense. My keyboard is 2 completely separate input devices, one that does capital letters (unless I hold down the shift key) and another that does lower case letters (unless I hold down the shift key). It's not one device with 2 modes at all.

I don't know if you're being sarcastic or not, but there are two ways to interpret that statement and as I don't know which one is the case I'll give my response to both:

Response 1:

I never said that your keyboard is two completely separate input devices; both of those fall under one device, just sending either an uppercase or a lowercase character event depending on the shift key and caps-lock key states.

Response 2:

I never said that your keyboard is one device with two modes (at least in the situation that you're describing); both of those are the same mode - specifically, "text input" mode - and are just different ways that the user has to choose which character event they want to send.

Obviously I'm using sarcasm to point out that "handwriting recognition in touch-pad mode is a completely separate device" is extremely idiotic (as idiotic as saying a keyboard is 2 completely separate devices because it also has 2 modes).

onlyonemac wrote:

Brendan wrote:

Any pointing device (and therefore any directional navigation device), in conjunction with an "on screen virtual keyboard", can emulate a text input device (this is extremely common - e.g. smartphones, tablets)

Any text input device can have a method or mode where it emulates a pointing device or directional navigation device (even if this means using words like "up" to represent movement)

An input device emulating a different category may be worse than a device intended for that category; but an input device emulating a different category is better than nothing when no other device exists.

The combinations above imply that all categories of input devices are able to emulate all other categories of input devices; which means that all input devices are able to generate all events to send to a front-end.

Each category of input devices has a small set of events intended for its native, "no emulation" use. These events can be combined into a super-set of events for all input devices.

The super-set events would have categories too - e.g. "pointing device events", "commands", and "literal characters/text"; but (partly because all input devices are able to generate all events anyway) there's no reason for a front-end to care what the input device/s actually are, or (when there are multiple different input devices being used simultaneously) which input device sent an event.
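As a hedged illustration of that super-set (every type name and field here is invented for the sketch, not taken from any actual OS), the events might look like:

```c
/* Hypothetical sketch of the "super-set" event described above;
 * names and categories are invented for illustration only. */
enum event_type {
    EVENT_POINTER_MOVE,     /* "pointing device events" */
    EVENT_POINTER_BUTTON,
    EVENT_COMMAND,          /* "commands" (shortcut, menu entry, ...) */
    EVENT_TEXT              /* "literal characters/text" */
};

struct input_event {
    enum event_type type;
    union {
        struct { int x, y; } pointer;   /* for EVENT_POINTER_* */
        unsigned command_id;            /* for EVENT_COMMAND */
        unsigned codepoint;             /* for EVENT_TEXT (Unicode) */
    } data;
    /* Deliberately no "source device" field: the front-end never
     * learns which input device generated the event. */
};
```

The absence of a source-device field is the design point being argued: any device (or emulation) can produce any of these events, so the front-end has nothing device-specific to branch on.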

All quite true, but we should still try to optimise our interface for the real device in use rather than forcing users to use an emulation with an interface designed for a different device.

"Forcing users to use an emulation with an interface designed for a different device" is a massive exaggeration. "Allowing users to use an emulation (when they have no better input device) with an interface designed for all input devices" is much more accurate.

onlyonemac wrote:

Brendan wrote:

You've just described a word-processor using a single front-end for all input devices; that has no need to care if (e.g.) text came from a keyboard or a pointing device (using an on screen keyboard) or handwriting recognition system; and has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode".

It's not a single frontend. If a text input device is being used, it needs to provide keyboard shortcuts. If a pointing device is being used, it needs to provide menus and toolbars designed for a pointing device. If a directional navigation device is being used, it needs to provide menus designed for a directional navigation device. Especially note the last two: the former needs frequently-used commands to be easy to reach with the pointing device (i.e. toolbars), with all the commands laid out logically in hierarchical menus; the latter needs all commands in the same menus (as the idea of toolbars doesn't work well with a directional navigation device), but the menus can't be too long and complicated or navigating them will be slow.

So it's a single front-end that provides menus (for all input devices) and commands (for keyboard shortcuts, handwriting shortcuts, speech shortcuts); where the menus need to be laid out logically (with frequently used commands easier to access) for all input devices; and where a toolbar is provided as an alternative way to access commands (for all input devices).

onlyonemac wrote:

Brendan wrote:

You've just described a graphics editor using a single front-end for all input devices; that has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode"; and has no reason to care if the pointer is being controlled by a mouse or touchpad or cursor keys or anything else.

The graphics editor does need to know what input device is being used. If a directional navigation device is used, it needs to allow for the fact that the cursor can only be moved in straight lines at a fixed velocity (or a velocity that follows some predefined curve, but isn't controlled directly by the user, because the hardware has only an on/off input) and provide some way for the user to still draw curves and freeform shapes; whereas if a pointing device is being used, those features aren't needed and will only get in the way of the user's use of the program. Likewise, if a pointing device is being used the editor needs to display a toolbox so that the user can quickly reach common drawing tools; but if a pointing device is absent, the toolbox isn't going to be useful and will just waste screen space, and the application should provide a more appropriate way of reaching common tools, such as gestures - which would in turn interfere with the use of the application if a pointing device were present and being used.

For all input devices, you'd want to provide a "use control points to create a spline/curve" feature (where the user can adjust the control points until they're happy with it before anything is actually drawn) because it's difficult to get a curve right when drawing freehand (with no way to adjust it after) even for the best possible input device (high-end graphics tablet). This is something that is provided in all decent image editors. All decent image editors also allow you to show/hide various things (toolbars, menus, grids, rulers, status bar, etc).
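The "control points" approach usually means something like a cubic Bezier curve: the user drags a handful of points and the editor keeps redrawing the curve from them until the user commits. A minimal sketch of the underlying primitive (my own illustration, not any particular editor's code):

```c
/* Evaluate a cubic Bezier curve at parameter t in [0,1], given four
 * control points p0..p3. This is the primitive behind "adjust the
 * control points until you're happy, then draw": the editor samples
 * many t values to render a preview, and only rasterizes the final
 * curve when the user commits. */
typedef struct { double x, y; } point;

static point bezier3(point p0, point p1, point p2, point p3, double t)
{
    double u = 1.0 - t;
    point r;
    r.x = u*u*u*p0.x + 3.0*u*u*t*p1.x + 3.0*u*t*t*p2.x + t*t*t*p3.x;
    r.y = u*u*u*p0.y + 3.0*u*u*t*p1.y + 3.0*u*t*t*p2.y + t*t*t*p3.y;
    return r;
}
```

Because the curve is defined entirely by the four points, it works equally well whether those points were placed by a mouse drag, cursor keys, or any other device - which is exactly why it suits "all input devices".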

onlyonemac wrote:

Brendan wrote:

handwriting system in "touchpad mode"

I've already told you that there is no such thing. It's called a "touchpad" and it's a completely separate device (as far as the OS is concerned) to a handwriting system even if it uses the same physical hardware. Getting that right in your head will go a long way to helping you to understand my posts.

I simply don't have access to the type and quantity of drugs necessary to get insane nonsense "right in my head".

If I say something that's extremely stupid, like "onlyonemac doesn't have arms and legs but has 20 tentacles instead" does it magically become fact?

If you say something that's extremely stupid, like "a graphics tablet, touchpad or touch screen suddenly transforms into a completely separate device when you add a handwriting recognition mode to it" why do you think that magically becomes a fact?

Cheers,

Brendan

_________________For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

I am astonished that this thread still exists, even if it has drifted through several topics on the way.

BTW, why are the two of you tying HWR to the devices at all? In every HWR and OCR system I've ever seen - whether it operates online or offline, regardless of the type of device or the purpose of the software (so yes, this applies to things like Inkwell and Graffiti) - digitization of the input is a separate step from recognition. AFAIK, real-time recognizers actually snapshot the digitized input at an interval (say, 0.5 milliseconds) and update the match on the fly (that is, each pass still has the data structures from the previous pass available and simply deltas the changes into them), but they are still basically working from a bitmapped image rather than the raw input. I suppose that some RT recognizers may have additional information from the device, such as pressure and contact time, but that's in addition to the image they're working from. IOW, from the perspective of most HWR software, the data source is irrelevant, since it isn't working from the hardware in any case.

Handwriting movement analysis is a different matter, but you can't do that effectively with just a tablet - a full analysis has to include the hand motions, not just the stylus contact with the input pad or screen - and is mainly used in medical diagnosis of neuromotor conditions such as Parkinson's disease, so it really doesn't apply here. I bring this up only because I get the impression that one or both of you think that HMA == HWR, or that HWR requires HMA, when in fact they are unrelated.

_________________Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν ("there is no royal road to geometry")
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.

I realise this is an old thread but, now that I have an external monitor (and this thread has calmed), I'd like to say something related to the original topic of this thread.

I've recently got a second-hand LCD monitor (I wanted 4:3 or similar, and I didn't want that "blue backlight"). Its native resolution is 1280x1024, but its EDID doesn't seem to work (the probed modes only went up to 1024x768), so I had to use xrandr to add a new video mode to choose from.
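For anyone hitting the same EDID problem, the usual xrandr sequence looks roughly like this. Note that "VGA-1" is a placeholder output name (run plain `xrandr` to see yours), and the modeline numbers come from whatever `cvt` prints on your system:

```shell
# Generate a CVT modeline for 1280x1024 @ 60 Hz.
cvt 1280 1024 60
# cvt prints something like:
#   Modeline "1280x1024_60.00"  109.00  1280 1360 1496 1712  1024 1027 1034 1063 -hsync +vsync

# Register the mode with the X server, attach it to the output
# ("VGA-1" is an example - substitute your own output name),
# and switch to it.
xrandr --newmode "1280x1024_60.00" 109.00 1280 1360 1496 1712 1024 1027 1034 1063 -hsync +vsync
xrandr --addmode VGA-1 1280x1024_60.00
xrandr --output VGA-1 --mode 1280x1024_60.00
```

This only lasts until the X server restarts; to make it permanent you'd put the equivalent in your display configuration.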

I can't imagine, however, an OS that wouldn't allow me to specify additional video modes. I know Brendan has got himself banned and isn't able to reply here, but I'll post this anyway.
