first:
some small system to assign roles to scanners.
say you have a general purpose scanner and a detail scanner.
the general purpose scanner would run all the time and the detail scanner should only be used for "active scanning", similar to what other games do with hard-differentiated scanners.
so the general purpose scanner would get either no role or some "general purpose" role, would be active pretty much all the time.
the detail scanner would get some "active scan" role and be generally offline, but upon a button press ("scan that ship") it gets powered up, performs the scan that's asked of it, and powers down again.

similar controls could be included for long range pings, weapons guidance radars, etc.
binding them to roles and triggering the roles instead of explicitly binding the scanners to keys.
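a minimal sketch of what i mean, in python (all class, role and method names here are made up purely for illustration, not anything from LT):

```python
# sketch of role-based scanner triggering: keys map to roles, not devices

class Scanner:
    def __init__(self, name):
        self.name = name
        self.powered = False

    def power_up(self):
        self.powered = True

    def power_down(self):
        self.powered = False

    def scan(self, target):
        # placeholder for the actual scanning logic
        return f"{self.name} scanned {target}"


class SensorSuite:
    """Maps role names to scanners, so a keybind triggers a role."""

    def __init__(self):
        self.roles = {}  # role name -> list of scanners with that role

    def assign(self, role, scanner):
        self.roles.setdefault(role, []).append(scanner)

    def trigger(self, role, target):
        """Power up every scanner with this role, scan, power down again."""
        results = []
        for scanner in self.roles.get(role, []):
            scanner.power_up()
            results.append(scanner.scan(target))
            scanner.power_down()
        return results


suite = SensorSuite()
suite.assign("active scan", Scanner("detail scanner"))
print(suite.trigger("active scan", "ship #42"))
```

the "scan that ship" key would then call trigger("active scan", ...) instead of addressing a specific device, so swapping out the installed detail scanner never touches the keybinds.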

second:
would frequency dependent frequency resolution be interesting?
so the frequency resolution wouldn't be uniform everywhere but would be segmented into high precision and low precision parts
illustration:

the red curve is the actual signal, the blue curve is what the sensor can tell you about the signal.
the maximum resolution is in the same area as the absolute value peak.

just some idea that was going round my head

third:
similar "just a thought":
would variable integration time for sensors be interesting?
explanation: "integration time" = time a sensor accumulates light/radiation/etc before reading out. (exposure time for photography people)
a sensor with a long integration time has a very low "frame rate": it can't produce many measurements per unit of time, so it's poorly suited to close/moving objects and doesn't give precise data on them, and the snapshots only come every few seconds. but data on stationary and/or faint objects is much better, because it accumulates more radiation to analyse per frame.
they would also be naturally inclined to be "active scanners" (in general gamer speak), as they give very good data, but the target and you have to hold still.

short integration time sensors give better data on moving objects and more snapshots, but have lower overall quality on stationary/faint objects.

long integration time: better under ideal conditions, for stationary scans and long range
short integration time: better under moving/non ideal conditions, for moving/combat/general usage
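as a toy model (the numbers and the shot-noise SNR assumption are illustrative, not LT's actual mechanics):

```python
import math

# toy model of the integration-time trade-off

def sensor_quality(flux, integration_time, angular_velocity):
    """flux: radiation received per second from the target.
    returns (frame_rate, snr, blur) for one exposure."""
    signal = flux * integration_time            # radiation accumulated per frame
    snr = math.sqrt(signal)                     # shot-noise limited: SNR ~ sqrt(N)
    blur = angular_velocity * integration_time  # smear of a moving target per frame
    frame_rate = 1.0 / integration_time
    return frame_rate, snr, blur

# faint, stationary target: the long exposure wins on SNR
print(sensor_quality(flux=4.0, integration_time=30.0, angular_velocity=0.0))
# fast mover nearby: the short exposure keeps the blur down
print(sensor_quality(flux=4.0, integration_time=0.1, angular_velocity=2.0))
```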

fourth:
i assume for this last part that wrecks are created quite often and dont look that different from intact ships visually.
so, the thing: i think it would be nice to be able to "disguise" yourself as a wreck by turning down your emissions and "play dead".
and only show up as a dead hull+equipment on active scanners, because your ship is no more than that and has no (or very low) emissions of its own.
just a thought for emergencies (make them believe that you are wrecked/dead), ambushes or plain hiding.

How does idea 1 function in game? Does the general purpose scanner only let you know that there is something within a certain radius of your ship? Does the active scan then function as the player targeting a ship and getting shield, weapon, system, and cargo status? This would include the ship's name and if it is hostile or not based on IFF data.

I'm not sure I fully understand what you are proposing in your second idea. If you are suggesting making the scanner in LT even finer than it already is, I might be against that idea. Sell me on why this would work well in LT.

I like the concept of the 3rd idea. Long exposures looking out at a far distance will easily pick up any slow moving object outside of sensor range. However, if used on nearby ships all you'd get would be a blurred image with useless data. So doing the same with different types of sensors could add flavor to LT.

The fourth idea could be a dangerous one to implement. Remember that other AI in LT are going to use every feature that gets coded into the game. So here you'd have to consider not just how the AI would treat you but how you'd like the same thing done to yourself. If you want to be able to ambush NPCs and are willing to be ambushed yourself then it's probably not a bad idea.

BFett wrote:How does idea 1 function in game? Does the general purpose scanner only let you know that there is something within a certain radius of your ship? Does the active scan then function as the player targeting a ship and getting shield, weapon, system, and cargo status? This would include the ship's name and if it is hostile or not based on IFF data.

you did read the other walls of text i wrote in this thread, no?

all scanners are basically "equal", but good at different things.

the "general purpose" scanner gives you reasonable data over reasonable distances with fast exposure to generate the "usual" space game sensor.
gives you position, speed, approximate ship type and, when the target is cooperating, everything the IFF provides.
not including name or faction status without further database access, because, how should the scanner know?
for general gameplay functionality it should also provide hull and shield strength estimates.

the special purpose scanners are, as my labeling suggests, for things beyond the classic information.
they are long exposure sensors which give you additional data compared to the general purpose scanner, but couldnt generate a reliable "normal" view on their own.
they give rough longer range data at the cost of producing only a single image every thirty seconds
OR (depending on the actual variant)
produce a "detailed scan" at close range (with similar time constraints, say once every 30 seconds) which gives higher detailed data about ships
cargo contents, equipment details, energy distribution etc.

its not that the general purpose scanner couldnt provide that data on its own, but its simply not designed for that.
it trades the detail capability for things that are more useful on average: reasonable range, reasonable info, fast acquisition.
you could probably design a general purpose scanner which approaches the detail abilities of a (cheap) detail scanner,
but it won't be as cheap to install and run, as small, or as fast at producing scans as the specialised one.

BFett wrote:
I'm not sure I fully understand what you are proposing in your second idea. If you are suggesting making the scanner in LT even finer than it already is, I might be against that idea. Sell me on why this would work well in LT.

im not saying that the scanner gets finer in general.
im saying that some scanners may have other areas of maximal detail.

so theres maybe a scanner which has a hundred frequency bars in the 10-100kHz range, but only one readout for the whole 1-10GHz range.
and another one the other way around.

the first scanner would, for example, be ideal to differentiate between different kinds of ore, the other would be good at differentiating drive systems.

not that we make all the sensor data have a billion individual readout bars, but that different sensors have different densities of readouts in different frequency ranges.

it would also be silly to demand more readout bars, as i have no idea how many there are and whether that's enough, too many, or too few.
thats a balancing constant, not a design constant.
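a sketch of how such segmented readouts could be described (the band edges and bin counts are made-up balancing numbers):

```python
# scanners with different readout-bar densities per frequency range

def make_bins(segments):
    """segments: list of (f_lo, f_hi, n_bins) tuples.
    returns a flat list of (lo, hi) readout-bar edges in Hz."""
    bins = []
    for f_lo, f_hi, n in segments:
        width = (f_hi - f_lo) / n
        bins += [(f_lo + i * width, f_lo + (i + 1) * width) for i in range(n)]
    return bins

# "ore differentiator": fine at low frequencies, one bar for the whole GHz range
ore_scanner = make_bins([(10e3, 100e3, 100), (1e9, 10e9, 1)])
# "drive differentiator": the other way around
drive_scanner = make_bins([(10e3, 100e3, 1), (1e9, 10e9, 100)])

print(len(ore_scanner), len(drive_scanner))  # both have 101 bars in total
```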

BFett wrote:
I like the concept of the 3rd idea. Long exposures looking out at a far distance will easily pick up any slow moving object outside of sensor range. However, if used on nearby ships all you'd get would be a blurred image with useless data. So doing the same with different types of sensors could add flavor to LT.

i'd rather say relative velocity / angular velocity, and not near/far, as the separator.
its just that near/far tends to coincide with angular velocity

BFett wrote:
The fourth idea could be a dangerous one to implement. Remember that other AI in LT are going to use every feature that gets coded into the game. So here you'd have to consider not just how the AI would treat you but how you'd like the same thing done to yourself. If you want to be able to ambush NPCs and are willing to be ambushed yourself then it's probably not a bad idea.

then you'd have to treat debris fields with care, because raiders and hostile scavengers could be hiding in them.
perfect

The idea of ships being able to pretend they're dead by masking or shutting down their emissions has always seemed worth exploring. One of the effects of there being some things that are not always as they seem to be is to encourage more thoughtful, deliberative play as opposed to rush in and start shooting play. Those who like the shooty play can still get it... but sneaky situations enable another enjoyable kind of fun.

Regarding the integrative mode, am I misremembering, or wasn't that the original form of the scanner that Josh showed us?

Flatfingers wrote:
Regarding the integrative mode, am I misremembering, or wasn't that the original form of the scanner that Josh showed us?

Not really.

The mk1 scanner generated a historical view, but didnt integrate.

Integrating would be if the sensor would sum up all radiation over a certain amount of time (say 30 seconds) and then analyse that sum image (without time information) over that time snippet.
So an integrating sensor would produce 1 image per time snippet.

You were a bit into photography, flat, no?
Take your camera for example, with long(er) exposure times you take longer to make an image, but you can get clear(er) pictures of dark scenes without getting an actually more sensitive/better image sensor.

Same with long integration time scanners, they put the equivalent of a thousand images in terms of light on top of each other and analyse that sum instead of the individual images.
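As a sketch (the tick-based bookkeeping is my own simplification, not anything from LT):

```python
# an integrating sensor: sum per-band intensity over a fixed window,
# then emit one readout ("image") per window

class IntegratingSensor:
    def __init__(self, n_bands, ticks_per_frame):
        self.ticks_per_frame = ticks_per_frame  # window length in sim ticks
        self.accum = [0.0] * n_bands            # running per-band sums
        self.ticks = 0

    def feed(self, band_intensities):
        """Accumulate one simulation tick; return a readout when the window is full."""
        for i, x in enumerate(band_intensities):
            self.accum[i] += x
        self.ticks += 1
        if self.ticks >= self.ticks_per_frame:
            readout, self.accum = self.accum, [0.0] * len(self.accum)
            self.ticks = 0
            return readout
        return None  # still integrating

sensor = IntegratingSensor(n_bands=3, ticks_per_frame=10)
for _ in range(10):
    frame = sensor.feed([1.0, 5.0, 0.0])
print(frame)  # → [10.0, 50.0, 0.0], the summed bands after one full window
```

A non-integrating sensor is then just the degenerate case with a window of one tick.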

That "integration" is really what a CCD does: each cell just counts photons over a given time period. Then it declares itself "on" if that count exceeds a set value.

A sensor array in LT could work similarly, with simulated "cells" counting visual messages received within particular frequency bands over some set amount of time, and representing that count as a bar.

It's sort of funny to think about visual data in a game in that way, though. Does that mean that every in-game object has to be coded such that emissions and reflections are really just data packets created to be detected by any ship's sensor within the sphere of detectability?

Let's suppose the universe consists of a star and a sensor mounted on a moveable ship. The Star object contains both data explaining how to display itself visually in the game, as well as an Emission data structure. This structure describes the type and number of radiation_elements emitted per unit of time, where a radiation_element consists of a frequency.

Meanwhile, a Sensor object includes methods that check the volume of space around the Sensor object's current location for any objects within the Sensor's maximum range. For each object that is, it iterates through that object's Emission and Reflection data structures. For each such structure, the Sensor's detector method applies the inverse-square law to any radiation_element within each frequency band currently active for that sensor. It can then visually display its current count of detected emissions or reflections within each of its currently active frequency bands.

Seen in this way, I'm starting to convince myself that an "integrative" mode would be nothing more than the regular mode -- just counting detections of radiation_elements over time -- only held for a few seconds or more instead of being updated every 0.1 seconds or so. Basically, your ship's sensor array would have a dial: turn it counter-clockwise, it looks like the rapidly-updating frequency scanner Josh showed most recently. And the more you turn the dial clockwise, the more time each of the sensor's frequency bands counts up the radiation_elements emitted or reflected by a nearby object.

Cranking the dial all the way clockwise would just set every band to 100% for a nearby "loud" object like a Star. But it would be an effective way to get a "picture" of a distant or quiet emission or reflection.
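A minimal sketch of that model (all names and numbers here are mine, purely illustrative; I've used the inverse-square falloff for radiation intensity):

```python
import math

# toy Star/Sensor emission model: the sensor counts radiation_elements
# per frequency band over its dwell time, with inverse-square falloff

class Star:
    def __init__(self, pos, emissions):
        self.pos = pos              # (x, y, z)
        self.emissions = emissions  # {band: radiation_elements per second}

class Sensor:
    def __init__(self, pos, bands, max_range, dwell):
        self.pos = pos
        self.bands = bands          # frequency bands currently active
        self.max_range = max_range
        self.dwell = dwell          # the "dial": seconds counted per readout

    def read(self, objects):
        counts = {band: 0.0 for band in self.bands}
        for obj in objects:
            r = math.dist(self.pos, obj.pos)
            if r > self.max_range:
                continue  # outside the sphere of detectability
            for band, rate in obj.emissions.items():
                if band in counts:
                    counts[band] += rate * self.dwell / (r * r)
        return counts

sun = Star((0, 0, 0), {"visible": 1e6, "radio": 1e3})
sensor = Sensor((100, 0, 0), ["visible", "radio"], max_range=1e4, dwell=0.1)
print(sensor.read([sun]))
```

Turning the dial clockwise is just increasing `dwell`, which scales every band's count linearly, so faint bands climb out of the noise floor.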

Does this sound close to plausible in its major aspects to you? Or are you thinking of something different? (Other comments also welcome!)

Flatfingers wrote:
That "integration" is really what a CCD does: each cell just counts photons over a given time period. Then it declares itself "on" if that count exceeds a set value.

Thats only half correct afaik.
They measure the charge thats built up over the set time, so they know how much light shone on the sensor over that time.
and that gives you how bright the respective pixel is.
pictures would look really crap if pixels only knew on or off.

On the rest of your text: sounds interesting, but i wonder what it would give us over a gamified version consisting of data about an object and its fidelity.

Counting packets would probably create gigantic overhead in terms of memory and calculations.

So for "non integrating" sensors i'd simply say they have 0 delay and give real time data for all intents and purposes, with no counting but directly the emission strength

And for integrating sensors an image every x seconds, giving the same readouts as the non integrating one.
With modifiers on fidelity based on relative movement.

In some edge cases it could provide different results, like ships moving in and out of detection range, but i think that can be mostly ignored or approximated by interpolating between the state when the scan started and the state when it ended.

what would a packet based system give that would make the overhead worthwhile?

Other than consistency with an existing object-based message-passing code structure, I can't think of any advantage to a more heavyweight model for handling emissions.

Which probably means it'll be a message-passing system. I have no worries that whatever Josh implements will do the job and be performant. I just doubt my ability to understand any of it once everything has been consistentized to a fine paste!

i fleshed that out in a recent burst of creativity in the shallow space forums linky
which built mostly on the ideas i wrote about on the first page of this thread.

i got some proper math together to outline the mechanics around angular resolution.

so, every sensor has a stat called "angular resolution".
this resolution is how many angular subdivisions a sensor can differentiate.
it is expressed as an angle which is the smallest cone the sensor can resolve.

ships have two factors which influence their detectability in terms of angular resolution.
their physical hull size, which is measured in meters and, obviously, describes how large the hull is
this is a factor only dependent on the size of the ships model and not on other modifiers, for reasons i'll explain further down.

the other factor is the "diffusion" factor which expresses how easy to lock on the ship is.
it is a dimensionless number >0 and modifies the effective resolved length of a scanner against the ship.

so, now that the variables are defined, how do they interact with each other?

a scanner can resolve only down to a certain length at a given distance; this length gets larger the further away the target is and the larger the resolved angle is (the lower the resolution).
so the closer the target is, the more precisely the sensor works.

this length is the smallest uncertainty a scanner can put on an object at that distance.
the uncertainty expresses itself as a (spherical) cloud which contains the ship's position.
the larger the uncertainty, the larger the cloud.
when a cloud includes another ship, the two clouds merge into one bigger cloud.
smaller ships hiding in a bigger ship's cloud stay "merged" with it until their hulls leave the bigger ship's cloud.
enabling fighters to hide in a bigger ship's sensor shadow for a while.

the uncertainty cloud has to become smaller than the hull size of the ship being detected to "pinpoint" it and for the ship's position to be perfectly known.
(and here's also the reason why the hull size should always equal the physical model of the ship: if the uncertainty cloud is smaller than the model of the ship it doesn't matter whether it is pinpointed or not, a shot into the cloud will hit it)
maybe some "hard to target" effects could come into play in the border regions of that condition (resolved length between 0.8 and 1.2 times the hull size), with imprecise target predictions and similar.

the diffusion modifier increases the uncertainty multiplicatively; the larger the modifier, the larger the uncertainty bubble.
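a quick sketch of the math as i read it (the exact formula is my own interpretation of the above; angles in radians):

```python
import math

# uncertainty cloud from angular resolution, distance and diffusion

def uncertainty_radius(angular_resolution, distance, diffusion):
    """smallest length the sensor can resolve at this distance.
    angular_resolution: smallest resolvable cone angle, in radians.
    diffusion: dimensionless > 0, widens the cloud multiplicatively."""
    return 2.0 * distance * math.tan(angular_resolution / 2.0) * diffusion

def pinpointed(angular_resolution, distance, diffusion, hull_size):
    """a ship is pinpointed once the cloud shrinks below its hull size."""
    return uncertainty_radius(angular_resolution, distance, diffusion) < hull_size

# a 1 milliradian sensor against a 50 m hull with diffusion 1.0:
print(pinpointed(1e-3, 10_000, 1.0, 50.0))   # True: ~10 m cloud at 10 km
print(pinpointed(1e-3, 100_000, 1.0, 50.0))  # False: ~100 m cloud at 100 km
```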

Cornflakes_91 wrote:a small ship with a very big very bright very hot reactor is less visible than a bigger, unpowered, chunk of metal?
if it isnt, which implies that you cant differentiate between a rock and a ship, are all asteroids in range marked the same way until they respond to an identification ping?
if the answer is no theres also no effective signature management by turning off/down your systems.

You could build your medium sized ship in a way where the core emits almost no energy waves and the hull of the ship is made of a specific material/shape/technology which reduces sensor visibility. (Also reduce power core output to improve stealth.)
That way its stealth value would go up and it would be less visible than a generic fighter.

i'd say to just use a generalised emissions based system and leave an explicit "stealth rating" system out
if it has less emissions, you detect it at shorter ranges.

theres a reason for all kinds of sensors to exist.
long(er) range sensors can only detect larg(er) objects

so theres really long range sensors at reasonable sizes, but they only see barn broadsides
all-around sensors for medium size objects but a lot shorter range
short range point defense sensors which arent any good for anything but missile defence and similar close situations