A number of scientific data sets for planetary textures are released in the form of smaller data tiles (e.g. Mars, ...). It seems to me that an important extension of the present F-TexTools set would be a flexible tool for merging the tiles together into ONE texture.


That would be gorgeous!

Also, as pointed out by DW, in a year or so we should try not to forget the new Moon data from the Japanese mission...

I've been having a hard time finding a suitable set of Mars HRSC tiles to download.
Somehow the official data is organized by "orbit numbers", which do not appear to map to geographical coordinates. I've also tried the HRSC online image viewer (which does provide coordinate-referenced data), but it doesn't seem to let me batch-download, say, all tiles.

So maybe an improvement for the texture tools would be a list of download URLs in the documentation, or some sort of script backed by a regularly updated library of URLs (hosted on the CM server), so that users can download image data more conveniently.

This isn't exactly a feature request for F-TexTools or nmtools per se, but...
Wouldn't it be nice if the NVIDIA texture tools (which convert PNG etc. -> DDS) also accepted stdin input, so the output from F-TexTools could just be piped in? That way we could go raw data -> pow2/half -> tiles -> DDS all in one go. Sounds good, no?


DW,

oh yes, it sounds good, and I have been thinking about this option all along. From my point of view, there are essentially two significant 'CON' arguments at this time, besides the many 'PROS':

1) Without Ignacio C. implementing the STDIN/STDOUT option himself, we would have to do it ourselves, but then we would lose the "off-the-shelf" advantages with respect to the nvidia-tools. Given the rapid development of the nvidia-tools, this seemed to me a bit early. So perhaps we can just convince Ignacio ...

2) Even if Ignacio implements STDIN/STDOUT redirection, we would still need to pass the different tile names tx_i_j.dds to nvcompress /automatically/, more than 2048 times. This is again easy if we build the nvidia code into the F-TexTools and the NmTools as a compression library. But if not, it looks problematic without extensive (OS-dependent) shell scripting.

Do you have a workaround for this?
But in any case, I am all for discussing the PROS and CONS further!

t00fri wrote:

2) Even if Ignacio implements STDIN/STDOUT redirection, we still need to pass to nvcompress > 2048 times /automatically/ the different tile names tx_i_j.dds.

Very good point... It might be possible, though, to run nvcompress from, say, within nmtiles by exec'ing nvcompress and establishing a pipe for each tile. While this would require nmtiles/F-TexTools to know the path of the nvcompress tool and its command-line options (and even those could be specified with, say, an environment variable), it would not require compiling in the nv tools as a library and would not require any shell scripting. This method would even work without modifying nvcompress to accept stdin (nmtiles writes a temporary PNG, nvcompress converts it to DDS, and the temporary PNG is deleted by nmtiles).
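A minimal sketch of that per-tile hand-off, in Python for brevity. The environment-variable name NVCOMPRESS and the bare two-argument invocation are assumptions; a real integration would live inside nmtiles and pass whatever flags nvcompress needs.

```python
import os
import subprocess
import tempfile

# Tool path taken from an environment variable, as suggested above.
# (NVCOMPRESS as the variable name is an assumption.)
NVCOMPRESS = os.environ.get("NVCOMPRESS", "nvcompress")

def compress_tile(png_bytes, dds_path, tool=None):
    """Convert one tile (raw PNG bytes) to a DDS file via an external tool:
    write a temporary PNG, exec the tool on it, delete the PNG afterwards."""
    fd, tmp_png = tempfile.mkstemp(suffix=".png")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(png_bytes)
        # e.g. "nvcompress tmp.png tx_i_j.dds"; real runs may need extra flags
        subprocess.run([tool or NVCOMPRESS, tmp_png, dds_path], check=True)
    finally:
        os.remove(tmp_png)
```

The caller loops over all tile names (tx_i_j.dds), so no shell scripting is needed and the external tool never has to learn about stdin.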


Of course, that could be the elegant solution... except that we have to realize it in a cross-platform manner. Let's think more about this option. I really like it.

I was thinking it should be fairly easy to convert a bump map to a 16 (or 8) bit elevation map.
If you say that the average height of a map is gray value 128, which corresponds to an elevation of 0, it should be possible to calculate the corresponding values, where black (0) maps to the bottom of the signed 16-bit range (-32768) and white toward the top (+32767).

After converting the bump map to a binary elevation map, you can convert this elevation map to a normal map with the nmtools.

This could be handy if you would like, for instance, to convert an existing Moon bump map or DEM to a normal map.
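The proposed mapping, written out as a sketch. It assumes a purely linear scale with gray 128 pinned to elevation 0; the scale factor of 256 is an assumption, and a real tool would take the map's actual height range as a parameter.

```python
def gray_to_elevation(g):
    """Map an 8-bit gray value (0..255) onto a signed 16-bit elevation,
    with the average gray value 128 treated as elevation 0."""
    return (g - 128) * 256  # 0 -> -32768, 128 -> 0, 255 -> +32512
```

Note that this only rescales the 256 distinct input levels; it cannot add the precision a true 16-bit data set would have, which is the objection raised in the reply below.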


Cap-Team,

your arguments are beside the point. The main challenge for normal maps is that they must be generated from VERY smooth elevation or bump maps. Only for "baby" resolutions like 2k (for which my tools are really a tremendous overkill) could you sensibly use 8-bit grayscale bump or elevation maps as input. Of course I have a conversion tool, 'png2bin' (the inverse of bin2png), as part of my F-TexTools. So if you really don't believe me, take a grayscale .PNG elevation map, e.g. of Mars, convert it to 8-bit bin format and calculate a normal map with my nmtools. But you will be disappointed, because of the noisiness.

You cannot cheat in the way you propose and make 16-bit binary data from 8-bit data. The existing Moon elevation maps (based on DATA) are ALL of pretty bad quality. So no matter what you do, they are NO fun.

DEMs you can write out without quality compromise into 16-bit binary format with the help of the ISIS3 tools. So they're fine.

* An application "pngsize" that returns the width of a PNG file as an error level: 0 = error (too small, invalid aspect ratio or other problem), 1 = 1024..2047, 2 = 2048..4095, etc. In other words, error level = int(log2(width) - log2(512)).
* An application "png2pow2", which is like tx2pow2 except that it works with PNG files. There is no need to specify width or height with this application, because these are encoded in the PNG. It outputs the binary format.

If implemented, these would make it possible to write a universal script that works with any size of PNG file, instead of only those within a certain size range. At present, the weakness of the F-TexTools package is that the size of the source texture must be known in advance. With these utilities, a script can test the value returned by pngsize and branch to the section that deals with PNG files of that particular size.
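The proposed return-code rule, written out as a sketch ("pngsize" itself is hypothetical; this just shows the arithmetic):

```python
import math

def pngsize_level(width):
    """Return code of the proposed "pngsize" tool: 0 for widths below 1024,
    otherwise int(log2(width) - log2(512)), so 1024..2047 -> 1,
    2048..4095 -> 2, and so on."""
    if width < 1024:
        return 0  # too small (a real tool would also check aspect ratio)
    return int(math.log2(width) - math.log2(512))
```

A script would then test this value and branch to the section that handles textures of that size class.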

sorry, but the intention of the tools is not to come up with yet another all-round set for general image manipulation. The tools are basically for handling HIGHEST quality binary raw format data from the scientific archives.

The elegant pipe mechanism of my "toolbox" makes something like "png2pow2" entirely superfluous. Here is how you do it:

Let 'input.png' be an RGB .png texture of size 5400x2700, say.

Then you type at the prompt:

png2bin < input.png | tx2pow2 3 5400 | bin2png 3 4096 > output.png

output.png is a 4096x2048 .png texture with compression level 6.

And if you want to know the sizes beforehand, install ImageMagick and you get a (terribly slow) utility, 'identify', that tells you everything about the texture concerned.

Here is what you would get without any options applied:
> identify input.png

If you want to read out the texture size in a script, 'identify' can do this, too. Except that for the monster textures my tools are designed for (!), it will take longer than all the rest of the job. I used 'identify' in my virtualtex script, which MANY people have used in the past.

The shell must know the bit-shift operator <<, however. Otherwise, you simply multiply by a factor of two each time...

Anyway, for such extremely /simple/ script tasks, one would not want to code a /specialized/ compiled program simultaneously for 3 operating systems ...

Really, I don't see why one has to spoil users to the extent that they don't need to know the initial size of their textures! Ignorance hurts. Without using 'identify', all you need to input to the script is the original texture size; the rest can be traced automatically within the script. Also, everyone uses these scripts basically only ONCE (in a while), not regularly... Hence typing in a number is not all that much of a pain ...

t00fri wrote:

sorry but the intention of the tools is not to come up with another alround set for general image manipulations. The tools are basically for handling HIGHEST quality binary raw format from the scientific archives.

It doesn't matter what they are designed for. Often users find ways of applying software tools that the designers didn't expect. With just a little work, these tools can be incorporated into a script that splits up a texture of a fixed size into several levels of textures, all placed in the correct level directories, complete with a CLX file as well. If I can do that with the limited capabilities of a DOS batch script, it's not hard to do in a shell script either.
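The bookkeeping such a script needs can be sketched briefly. This assumes Celestia's virtual-texture convention (level 0 holds 2x1 tiles, each level doubles the resolution); the 512-pixel tile size and the "levelN" directory names are assumptions about the target layout, and generating the tiles and the CLX file itself is omitted.

```python
import math
import os

def make_level_dirs(width, root, tile_size=512):
    """Derive how many levels a texture of the given width needs and
    create the matching level0..levelN directories under root."""
    # level i is (2 * tile_size * 2**i) pixels wide, so level 0 starts
    # at 2 * tile_size; count doublings up to the full width.
    levels = int(math.log2(width // (2 * tile_size))) + 1
    for i in range(levels):
        os.makedirs(os.path.join(root, "level%d" % i), exist_ok=True)
    return levels
```

For a 4096-pixel-wide texture with 512-pixel tiles this yields three levels (level0 through level2).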

Now, we don't really need to figure out the width of the image; other tools can do this as well. But the F-TexTools also require users to know the number of bytes per pixel, and this is not easy to find out.

t00fri wrote:

And if you want to know the sizes beforehand, install ImageMagick and you can have a (terribly slow) utility (identify) that tells you everything about the texture concerned.

Here is what you would get without any options applied:
> identify input.png

If you want to read out the texture size in a script, 'identify' can do this, too. Except, for monster textures for which my tools are designed (!), it will take longer than all the rest of the job. I used 'identify' in my virtualtex script that MANY people have used in the past.

We shouldn't need to install ImageMagick just to get access to the 'identify' tool.

We should be able to write a simple utility that interrogates the IHDR chunk in the PNG file to retrieve the height and width. It should also retrieve the colour type and convert it to the "channels" value required in many places in the suite. Knowing the colour type is crucial to the correct operation of the F-TexTools suite, but unless I missed something in the documentation, at present the only way to determine it using the suite is trial and error. And when you're working with humongous texture files that take a while to process, users may get frustrated if they need to try more than once. In fairness, there are likely to be only two values to try: "3" and "4".

If such a utility is provided (let's call it "pnginfo"), it would not use the arcane conventions of 'identify', but could instead be tailored to the conventions of F-TexTools. It needs to output height, width and bytes per pixel to stdout.
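A sketch of what such a "pnginfo" could do (the tool name is from the suggestion above, not an existing program). IHDR is always the first chunk in a PNG, so width, height, bit depth and colour type sit at fixed offsets right after the 8-byte signature and the 8-byte chunk length/type:

```python
import struct

# PNG colour type -> number of channels (palette images use indices,
# so bytes-per-pixel for type 3 is not a straight channels * depth / 8).
_CHANNELS = {0: 1, 2: 3, 3: 1, 4: 2, 6: 4}

def pnginfo(path):
    """Return (width, height, bytes per pixel) read from the IHDR chunk."""
    with open(path, "rb") as f:
        header = f.read(33)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    width, height, depth, ctype = struct.unpack(">IIBB", header[16:26])
    # bytes per pixel = channels * bit depth / 8
    return width, height, _CHANNELS[ctype] * depth // 8
```

The last value is exactly the "channels" argument most F-TexTools commands expect for 8-bit images: 3 for an RGB texture, 4 for RGBA.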

Users can then cut and paste the information into other parts of the script. Instead of guessing the correct values, they can call pnginfo and get the info directly.

(I forget the parameters for the cut command but hopefully the intention is clear)

Or, if you use a shell that supports variables, you can assign the width and the number of channels to variables and reference those, instead of running `pnginfo input.png | cut ...` each time.

t00fri wrote:

Really, I don't see why one has to spoil users to the extent that they don't need to know the initial size of their textures!

But what if the users don't know the initial size or, more to the point, the number of channels used by the texture? While 'identify' reports the width of the texture in an obvious format, I doubt many people could use it to work out how many bytes are used per pixel. The one weakness of the F-TexTools scripts is the need to pass the number of channels to most of the commands. How do we work out this number from a PNG file?

I can manage without having the size of the PNG output; after all, I can read the size from the IHDR chunk in the PNG file itself with a hex editor if I need to. However, figuring out the correct value for bytes per pixel is a bit more arcane, and it may be helpful to provide a simple utility to determine this, or at least to document briefly how this value is calculated from the Bit depth and Colour type bytes.
