
I have a question related to fonts and scaling. I looked at other postings and could not find anything to answer my question. What I want is to be able to quickly scale a font as you would an image, as opposed to creating multiple fonts (all the same except for different sizes). I have noted that there is a "size" parameter that will enlarge or shrink the font, but then you have to recreate the font every time you want a small change in size.

So let me give an example. I created a new Button type that is simply just an image. As the button area changes, the image scales accurately based on the area. I have a font defined that shows up at the same size no matter what changes I make to the area. I have played around with the autoScaled property, but it is not what I am looking for. I would like the font to scale as the area scales; I just don't know if that is possible or how it could be done. In this example, I want "Quit" to be scaled a bit larger than "Play", since its area is larger. Does that make sense? I didn't want to re-create a new font just to do this. Any suggestions?

Also, I plan to create an animation that will scale a button, and I would like the text to scale as well. So scaling the text is a feature I would like to have.

This is not possible due to the way our Fonts are currently handled. Our font system isn't very flexible at the moment. The auto-scale is relative to the size of your application window, so the font only changes if you resize the entire application window.

The Font system could probably be adjusted to support what you want, but it would probably require more than just a couple of hours of work if done in a way that can be merged into the CEGUI source code. I am also not aware of a way to "hack" this quickly. The way I would work around this - if you really need it - is by not making the text separate, but making it part of the background image of your button. It will of course get blurry then if you size it bigger than the original image.

Thanks for the quick response, Ident. I will consider creating the text as an image as a quick workaround for the fonts to scale. I think it is the quick way to move forward with what I want to do.

If you were to suggest a long-term solution, as in designing something that could be merged into CEGUI to allow scaling of fonts, how would you recommend doing this at a high level? Would you still utilize FreeType to rasterize an image of the font glyphs, or something completely different? Say a different library, or possibly some sort of vector approach. GLyphy is a project that sends the vectors directly to the GPU. Cool idea, but it only supports an OpenGL ES2 renderer.

If a redesign of the font portion of CEGUI were to take place and continue to use FreeType, I would guess that the current "size" of the font would become irrelevant. I believe that instead of a font size, a transition to the consistent CEGUI spatial sizing mechanisms, such as UDims and Areas, would make sense. I see two ways of implementing the rendering of the fonts where scaling would be better supported:

1) Create large glyphs once (from the TTF), then scale down accordingly.
2) Re-render a glyph (from the TTF using FreeType) each time the size (UDim/area) changes.

The problem with 1 is, as you said, that scaling may cause blurriness if you are not careful. It could also be costly in memory to keep these large glyph textures around. (2) could have CPU performance issues. If you were animating (scaling) a window with text that grows as the window grows and shrinks as the window shrinks, then the glyphs and images would be re-created each frame (multiple times, since there are multiple letters). That would be expensive, and I would imagine it would slow down the frame rate significantly. I am not sure how much time FreeType would take to do this, but I would guess it would be quite time-consuming.
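The per-frame cost of approach 2 could be softened with a cache keyed by (codepoint, pixel size), so the rasterizer is only hit the first time a given size appears; quantising the requested size keeps a smooth animation from producing a brand-new size every frame. A minimal sketch under that assumption - the `renderGlyph()` stub stands in for a real FreeType `FT_Load_Char`/`FT_Render_Glyph` call, and all names here are hypothetical, not CEGUI API:

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <utility>

// Placeholder for a rasterised glyph; real code would hold bitmap pixels.
struct GlyphBitmap { int width; int height; };

static int g_renderCalls = 0; // counts how often we "hit FreeType"

// Stub standing in for the expensive FreeType rasterisation step.
GlyphBitmap renderGlyph(char32_t codepoint, int pixelSize)
{
    (void)codepoint;
    ++g_renderCalls;
    return GlyphBitmap{ pixelSize, pixelSize };
}

class GlyphCache
{
public:
    // Quantise the requested size so a smooth animation (e.g. 12.0 -> 12.2)
    // does not force a re-render on every frame.
    static int quantise(float requestedSize)
    {
        return static_cast<int>(std::lround(requestedSize));
    }

    const GlyphBitmap& get(char32_t cp, float requestedSize)
    {
        const int size = quantise(requestedSize);
        const auto key = std::make_pair(cp, size);
        auto it = m_cache.find(key);
        if (it == m_cache.end())
            it = m_cache.emplace(key, renderGlyph(cp, size)).first;
        return it->second;
    }

private:
    std::map<std::pair<char32_t, int>, GlyphBitmap> m_cache;
};
```

This only helps when the animation revisits sizes it has already seen; a continuous zoom over a large range would still fill the cache with many rendered sizes.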

I would love to try to contribute to this effort. I am not very familiar with CEGUI internal code nor freetype so it would take me some time to get up to speed on everything.

djreep81 wrote:If you were to suggest a long-term solution, as in designing something that could be merged into CEGUI to allow scaling of fonts, how would you recommend doing this at a high level? Would you still utilize FreeType to rasterize an image of the font glyphs, or something completely different? Say a different library, or possibly some sort of vector approach. GLyphy is a project that sends the vectors directly to the GPU. Cool idea, but it only supports an OpenGL ES2 renderer.

I would definitely do it similar to the way it is already done. Doing it any other way is not feasible for real-time rendering. Vector graphics rendering on the GPU is only a good idea if you aren't utilising the GPU for anything else, so for a game you would never want to do that.

Currently we render every glyph of the font separately, which decreases the readability of text a lot, because the letters are spaced independently of which letters precede or follow them. If you open a book and look at the letters, you will notice that the spacing changes depending on the two letters that are next to each other. For more info see: http://en.wikipedia.org/wiki/Kerning

The correct way to solve this is to let the font library handle it. FreeType is capable of that in case there is a "kern" table, which is a pretty much deprecated table, but it does not support GPOS tables ( http://www.freetype.org/freetype2/docs/ ... step2.html ).

There are two ways to render this. Like I said, we render glyph by glyph separately. I think we can keep that up and just position the letters differently according to kerning (correct me if I am wrong). Other than that, each chunk of text to be rendered per line would require a separate rectangle as geometry, and would require the line to be rendered fully into a texture every time. A separate texture for each line of text would lead to a lot of different textures being rendered per frame. I am not sure how this would affect performance, but it would be better to use a single texture for all lines. For that, some sort of alignment and fitting algorithm would be required, making this so complicated that it goes beyond the scope of CEGUI; I would not assume FreeType does this for us. So the solutions for kerning would be either 1.) keep the old way of rendering each glyph separately, but position the glyphs according to kerning, or 2.) render chunks of text (each line of text) into separate textures every time the text changes, and use a rectangle geometry for each of those lines.

In either case, when using kerning, the "selection" size changes. We have elements like the editbox which allow selecting text, and the text selection will have to be adapted for that - I am not sure what changes are required, but it shouldn't be too hard to do. The question regarding case 2.) is mainly the performance of redrawing the text. Usually a lot of text in applications is static. In games, text might change often (counters); if this happens every frame, then a texture would have to be changed every frame (or even multiple textures) and uploaded from local memory to GPU memory, which is not good at all. This would be fine if done only a few times per second, but not every frame...
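Option 1.) - keep per-glyph rendering, but shift each pen position by a per-pair kerning adjustment - can be sketched in a few lines. The advance widths and the kerning table below are made-up numbers standing in for what FreeType's `FT_Get_Kerning` would return from a "kern" table; this is an illustration of the positioning idea, not CEGUI code:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Returns the horizontal pen position of each glyph in `text`.
// `advances` maps a character to its advance width; `kerning` maps a
// character pair to an adjustment (usually negative, pulling the pair
// closer together, e.g. 'A' followed by 'V').
std::vector<int> layoutLine(const std::string& text,
                            const std::map<char, int>& advances,
                            const std::map<std::pair<char, char>, int>& kerning)
{
    std::vector<int> positions;
    int pen = 0;
    char prev = 0;
    for (char c : text)
    {
        if (prev != 0)
        {
            const auto k = kerning.find(std::make_pair(prev, c));
            if (k != kerning.end())
                pen += k->second; // apply pair adjustment before placing glyph
        }
        positions.push_back(pen);
        pen += advances.at(c);
        prev = c;
    }
    return positions;
}
```

Since each glyph still gets its own quad, the existing per-glyph geometry path would survive unchanged; only the pen advance logic gains the pair lookup.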

Another issue we have is regarding anti-aliasing. The way our Fonts are rendered doesn't use the r/g/b positions of the subpixels on the screen for better readability, like most operating systems and browsers do. This could be done in CEGUI, but it requires a couple of changes, because we apply colours to the glyphs, which are rendered just black-and-white, so changes would have to be made at several points.
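The colour-application step mentioned here can be pictured as modulating a greyscale coverage value by the text colour at draw time. A minimal sketch of that idea (an illustration, not the actual CEGUI code):

```cpp
#include <cassert>
#include <cstdint>

struct RGBA { std::uint8_t r, g, b, a; };

// A glyph pixel carries only a coverage value (0 = background,
// 255 = fully inside the glyph). The text colour is applied by using
// that coverage as the alpha of the colour at this pixel.
RGBA tintGlyphPixel(std::uint8_t coverage, RGBA textColour)
{
    RGBA out = textColour;
    out.a = static_cast<std::uint8_t>((textColour.a * coverage) / 255);
    return out;
}
```

With LCD subpixel rendering, each pixel instead carries three coverage values (one per r/g/b subpixel), so this single per-pixel modulation is exactly the part that would need reworking.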

djreep81 wrote:1) Create large glyphs once (from the TTF), then scale down accordingly. 2) Re-render a glyph (from the TTF using FreeType) each time the size (UDim/area) changes.

Now to get back to your sizing issue. To ultimately solve this, we would, like you said, have to render the glyph atlas anew (given that we don't use the one-texture-per-line, re-render-every-line-per-change approach) for every font size change that is required. Rendering everything in bigger sizes just once is not a good solution, because the downscaling would mess up the anti-aliasing (which already isn't that great), and font readability is all about sharpness! You could try it out, but I am pretty sure it would not look nice. For example, render a large-sized text in some font in Photoshop or Gimp or whatever, turn it from a font into a normal image (rasterize it), and then downsize it to a very small size on your screen (the smallest size that is still well readable for your eyes) with trilinear filtering (or bilinear, but not bicubic or something like that, because GPUs don't have that). Create a new text in the same font at a font size similar to the one you just downsized and put it next to it. You will see the downsized one is kind of blurry on the edges, which makes it less readable, while the newly created one at the target font size should be well readable.

This is not the only problem. Most fonts change shape as they get bigger, for example so that they are still readable at small font sizes such as 10, 9 or 8. If you look closely, you will notice that almost all good fonts do that; some stay pixel-accurate by doing so. If you downsize a font, you can never get this. 1) is therefore not an acceptable solution in my opinion. 2) sounds better. However, if someone has windows that change every frame and adaptive font sizes for the text inside them, then that would lead to the CPU->GPU texture upload problem I mentioned before, and which you also mentioned in your post.
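The blur from plan 1) shows up even in a toy example: box-filtering a crisp black-and-white bitmap down by 2x turns hard edges into intermediate greys, which is exactly the softening described above. The 4x4 "glyph" below is just a vertical bar, not real font data:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Downscale a 4x4 coverage bitmap to 2x2 by averaging each 2x2 block
// (a box filter, roughly what bilinear minification does).
std::array<std::array<std::uint8_t, 2>, 2>
downscale2x(const std::array<std::array<std::uint8_t, 4>, 4>& src)
{
    std::array<std::array<std::uint8_t, 2>, 2> dst{};
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x)
        {
            const int sum = src[2 * y][2 * x] + src[2 * y][2 * x + 1]
                          + src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1];
            dst[y][x] = static_cast<std::uint8_t>(sum / 4);
        }
    return dst;
}
```

A one-pixel-wide bar of full coverage (255) averages down to a half-grey value: the crisp edge is gone, and no amount of filtering quality brings it back, because the hinted small-size outline was never rasterised.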

djreep81 wrote: Not sure how much time FreeType would take to do this, but I would guess it would be quite time-consuming.

You could do a test, but I am pretty sure that recreating the whole atlas of glyphs every time will lead to issues. An upload of the texture every frame doesn't sound like a good idea. You could try compression, but that leads to other issues, and it is especially not a good idea for anything containing fonts.
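The usual way to keep that upload off the per-frame path is a dirty flag: re-upload only when the text actually changed, not on every draw. A hypothetical sketch - the `uploadToGPU()` counter stands in for a real transfer such as `glTexSubImage2D`:

```cpp
#include <cassert>
#include <string>

// One line of rendered text with its backing texture. The texture is
// re-uploaded only when the string changes, so static text costs nothing
// per frame and a per-second counter costs one upload per second.
struct TextLine
{
    std::string text;
    bool dirty = true; // no texture uploaded yet
    int uploads = 0;   // stands in for actual GPU transfers

    void setText(const std::string& t)
    {
        if (t != text)
        {
            text = t;
            dirty = true; // re-rasterise and re-upload on next draw
        }
    }

    void draw()
    {
        if (dirty)
        {
            uploadToGPU();
            dirty = false;
        }
        // ...then issue the textured quad for this line...
    }

    void uploadToGPU() { ++uploads; }
};
```

This does not help the worst case Ident describes (text that genuinely changes every frame), but it makes the common static-text case free.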

djreep81 wrote:I would love to try to contribute to this effort. I am not very familiar with CEGUI internal code nor freetype so it would take me some time to get up to speed on everything.

We would love to see your contribution. I might have made wrong assumptions in my post, so maybe you will find out that something I said wouldn't work could actually work. You would have to try it out. Additionally, I am sure that solutions for this problem can be found somehow, but it would be best to discuss them first, so we are sure there are no theoretical issues, and then try them out to see that there are no practical ones.

Great information, Ident. I think I'll have to read your response a few times.

I believe I will start out with a few simple samples using FreeType (+OpenGL) to generate glyphs/textures in different scenarios and measure the performance of the different methods. I will also check out the Valve font rendering paper, thanks for the tip!

I'll post back when I have some sample code to share. When I feel I have a viable, agreed-on solution, I'll try to integrate it with CEGUI at that time. Most likely I'll use GitHub + CMake on a Linux platform, trying to keep it dependency-free.

I just looked at the papers, and I remember I had already looked at them once. They are not really applicable for a GUI; they are for 3D rendering. Forget them.

Hm, yeah, you could try to do something based only on OpenGL, but if I were you I would dive directly into our OpenGL renderer and look at how fonts are rendered. Best use the default branch, because I made changes to the renderer there that will stay. If you follow the process of Font rendering in our code step by step, you will probably understand what is going on in an hour or two, and maybe you will get an idea of what to change pretty soon.

But if you want to completely redo the process anyway, then making a new project is probably better, like you suggested, and then integrating it into CEGUI afterwards.