I am trying to implement an occlusion culling system for a tile-based game we are working on. Basically I have a 3D grid of tiles. Every tile has an index and a list of indices of other tiles; only tiles that are visible to the player when standing on the source tile should be in the list. My question is how to create this list, i.e. how to determine whether a given object gets drawn to the screen when looking at it from a given position. The data is currently created at design time, i.e. offline, but something fast enough to run on the device (mobile) at runtime would be better, since that would also work with procedural levels.

I have a system in place that sort of works. For every tile in the level I "look at" every other tile and shoot a number of rays randomly in roughly the direction of the target tile. If any of the rays hits the tile, that tile is considered visible. I get pretty good results this way, but it tends to be very slow, since I need to cast around 200 rays per tile. Also, objects that occupy only a few pixels on the screen almost never get hit, so there are a few holes in my data.
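For reference, the sampling loop described above can be sketched as follows. This is a minimal sketch, not my actual code: `raycast(origin, point)` is a hypothetical callback standing in for the engine's physics ray query, and the target tile is assumed to be described by an axis-aligned bounding box.

```python
import random

def tile_visible(src_center, target_bounds, raycast, n_rays=200, seed=0):
    """Monte Carlo visibility test: shoot rays from the source tile's
    center toward random points inside the target tile's AABB.
    `raycast(origin, point)` is a stand-in for the engine's ray query;
    it should return True if the segment reaches `point` unobstructed."""
    rng = random.Random(seed)
    lo, hi = target_bounds
    for _ in range(n_rays):
        # random sample point inside the target tile's bounding box
        p = tuple(rng.uniform(lo[i], hi[i]) for i in range(3))
        if raycast(src_center, p):
            return True  # one unobstructed ray is enough
    return False
```

The "small objects almost never get hit" problem follows directly from this: the hit probability scales with the target's solid angle, so tiny targets need far more than 200 samples to be found reliably.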

I had the idea of placing a camera at the tile and making it look at the target tile. I would then render the scene with a special shader that takes a texture and the tile index as input. If the pixel shader runs (i.e. the object is drawn), it writes the tile index to the texture. My guess is that this only works if I sort the objects front-to-back before drawing them; otherwise the pixel shader would run for a number of tiles even though they are overdrawn. Also, I am using Unity and I am not sure how easy this is to do in that engine.
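One note on the sorting concern: if the tile index is written as the fragment color with depth testing enabled, draw order stops mattering, because occluded fragments fail the depth test and never land in the buffer. A tiny software sketch of that "item buffer" idea (with made-up axis-aligned screen-space quads standing in for the real render):

```python
def rasterize_ids(width, height, quads):
    """Tiny software 'item buffer': each quad is (tile_id, depth, x0, y0, x1, y1)
    in screen space. With a per-pixel depth test, draw order doesn't matter:
    the nearest tile's id wins at every pixel, so no front-to-back sort is needed."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    ids = [[None] * width for _ in range(height)]
    for tile_id, z, x0, y0, x1, y1 in quads:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                if z < depth[y][x]:       # depth test replaces sorting
                    depth[y][x] = z
                    ids[y][x] = tile_id
    # every id that survived in the buffer belongs to a visible tile
    return {t for row in ids for t in row if t is not None}
```

Reading back which ids remain in the buffer after the pass gives you the visible set directly, regardless of submission order.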

Let me see if I understand: you are testing what the player can see, and only want to render what the player sees? If you are using 3D tiles, I imagine your grid is really more like a cube? You might do a frustum check, which would limit the raycasts to only the tiles/cubes in the frustum. You might also be able to use some form of Bresenham line, adapted to 3D, instead of a raycast (or Wu's line, if you allow the player to be non-centered on a tile).
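To make the Bresenham-style suggestion concrete, here's a sketch of a 3D grid walk in the spirit of the Amanatides–Woo voxel traversal. The solid-grid representation (`grid[x][y][z]` truthy means the cell blocks sight) and center-to-center rays are assumptions on my part:

```python
import math

def line_of_sight(grid, a, b):
    """3D DDA (Amanatides & Woo style) voxel walk from cell a to cell b.
    Returns False as soon as the walk enters a solid cell strictly
    between a and b; both endpoints themselves are skipped."""
    dx, dy, dz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    step = tuple(0 if d == 0 else (1 if d > 0 else -1) for d in (dx, dy, dz))
    # parametric distance along the ray between successive grid planes per axis
    t_delta = tuple(abs(1.0 / d) if d else math.inf for d in (dx, dy, dz))
    # distance to the first plane crossing (rays start at the cell center)
    t_max = [0.5 * td if td != math.inf else math.inf for td in t_delta]
    pos = list(a)
    while tuple(pos) != tuple(b):
        axis = t_max.index(min(t_max))     # cross the nearest grid plane next
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        if tuple(pos) == tuple(b):
            return True
        if grid[pos[0]][pos[1]][pos[2]]:
            return False                   # occluder between the tiles
    return True
```

Unlike the random-ray approach, this visits each cell along the line exactly once, so it is deterministic and never "misses" a small occluder that the line actually passes through.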

> I had the idea of placing a camera at the tile and making it look at the target tile. I would then render the scene with a special shader that would take a texture and the tile index as input. If the pixel shader would run (if the object is drawn) then it would write the tile index to the texture. My guess is that this would only work if I sort the objects front-to-back before drawing them. Otherwise the PS would run for a number of tiles even if they are overdrawn. Also, I am using Unity and I am not sure how easy this is to do in that engine.

Or you can just use an occlusion query (if Unity supports those), which were made to solve exactly this problem. A potential problem with occlusion queries is that they can have a lot of latency: sitting and waiting for the result can leave the whole computer idle. You may find it more useful to use an asynchronous model, such that the query you issue on frame X doesn't take effect until the rendering on frame X+Y. The player is not going to notice a couple of frames of lag in visibility updates; in many types of games you could even run the visibility checks only every half second without the player noticing or caring.
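The frame X / frame X+Y scheduling can be sketched independently of any graphics API. In this sketch `gpu_test` is a hypothetical callback standing in for the actual occlusion-query readback; for simplicity it evaluates eagerly, whereas a real query's result would only be fetched at consume time:

```python
from collections import deque

class DeferredVisibility:
    """Sketch of latency-hiding occlusion queries: results issued on
    frame X are only consumed LATENCY frames later, so the renderer
    never stalls waiting on the GPU."""
    LATENCY = 2  # frames between issuing a query and using its result

    def __init__(self, gpu_test):
        self.gpu_test = gpu_test
        self.pending = deque()   # (frame_issued, tile_id, result)
        self.visible = set()     # last known visible set

    def frame(self, frame_no, tiles_to_query):
        # issue this frame's queries without waiting on them
        for t in tiles_to_query:
            self.pending.append((frame_no, t, self.gpu_test(t)))
        # consume only results that are at least LATENCY frames old
        while self.pending and frame_no - self.pending[0][0] >= self.LATENCY:
            _, t, passed = self.pending.popleft()
            if passed:
                self.visible.add(t)
            else:
                self.visible.discard(t)
        return self.visible
```

Until a tile's result arrives, the renderer keeps using the last known visible set, which is exactly the "couple frames of lag" trade-off described above.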

If Unity doesn't support occlusion queries, your suggestion is a good approximation of them.