Orbiter Challenges >> VSA

Well, it is nothing I invented, and it is nothing brand new, either. While we had our VSA madness here on O-F, the French secretly developed a weapon of mass... entertainment. Erm... no, I don't mean Dave Stewart... that was just a picture on Google ;)... I mean this: (dead link)

THIS is what I think can really work in the end, in contrast to a VSA. It has:

Competition (high scores)

Ranks (if you have enough points, you can fly more complex challenges)

Illusion of organization

Roles (pilots, challenge creators, administrators)

A fancy website

BUT! You already guessed that I'd pull out a BUT, didn't you?

Despite it being a really, really good idea, I think it is a bit too ambitious. Why do I think so? Well, it has automatic setup (a kind of addon installer), automatic challenge guidance (an MFD with "waypoints") and automatic scoring (by sending game-saves to a server).

Sounds good at first, but if you think about it, you quickly come to the conclusion that a full-auto mode here is a no-no.

OK, it may work with a small group of friends who don't feel the urge to kick up their score by plain cheating, but if you go public with such a system, it will soon become a security nightmare. People can go crazy for high scores...

In addition, the need for automatic verification of achieved goals places a burden on challenge creators to play by the rule-book of possible tests (planet distance, docking etc.). What if you want to make a UMMU transfer a challenge? You'd have to implement special tests in the MFD for that, and dynamically updating that checker would be a programming challenge of its own.

Nevertheless, the idea of Orbiter Challenges is what good collaborative game-play in the Orbiter environment should reach for. My hat is off to tofitouf.

So, what would I do differently, then? Ranting is easy enough; you'd better have some proposals at hand if you do...

As I wrote, the full auto mode is what makes me uneasy, especially the scoring part of it. IMHO, there should be a credible validation system for challenges. An MFD sending save-states is not credible enough.

Another thing is the heavily centralized nature. There is one server, one list, one ivory tower.

Imagine that ivory tower breaking down... it would be one hell of a mess, no?

So, in order not to fall into those pits, I'd say keep the best parts of the Orbiter Challenges idea, but leave the hard parts out (full-auto and SPOF). It would then look like this:

Let's define the core unit in the concept: the challenge. A challenge is a small storyline that describes a manageable mission for one person. This would be along the lines of "start at Wideawake and go into circular orbit". You could perhaps add "with a DG" or specify the scenario start time and orbit inclination, too. It should just not be something like "build a space station" - we had that (OFSS is another nice example of successful collaborative game-play in Orbiter, IMHO). It would be too complex for one person, anyway...
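Just to make the shape of such a challenge concrete, here is a minimal sketch of how one could be written down as structured text. All field names here are my own invention, not part of the proposal - a bunch of plain text would do just as well:

```python
# Hypothetical sketch of a challenge description; every field name is
# made up for illustration. Only "briefing" is really essential.
challenge = {
    "title": "Wideawake to circular orbit",
    "briefing": "Start at Wideawake and go into a circular orbit.",
    "vessel": "DG",                   # optional: prescribed vessel
    "target_inclination_deg": 28.5,   # optional: orbit constraint
}

def describe(c):
    """Render the challenge as the short storyline a pilot would read."""
    extras = []
    if "vessel" in c:
        extras.append("with a " + c["vessel"])
    if "target_inclination_deg" in c:
        extras.append("inclination %.1f deg" % c["target_inclination_deg"])
    text = c["briefing"]
    if extras:
        text += " (" + ", ".join(extras) + ")"
    return text

print(describe(challenge))
```

The point is only that a challenge stays small enough to fit into a few lines of description, whatever the actual storage format is.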

Of course, there are some roles people can take.

Naturally, the first role taken would be that of the challenge creator. He/she would be the one to get creative and design challenges (look, it gets better if you use the right verbs... design... wow...). Whether it is a scenario, a complete addon, or just a bunch of text describing the challenge doesn't matter.

The obvious next role is the one for the hotshots: the pilot. The pilot flies the mission.

Now there is a role in this proposal that diverges a bit from tofitouf's original idea: the supervisor. This should be a person familiar with the challenge - ideally the creator himself/herself. He/she gives the pilot a briefing, watches the mission, answers questions, and helps in case of mission-related trouble. Why should he do this? Well, someone has to score the mission afterwards, otherwise it is just flying a scenario. The latter exists already; it is called "Orbiter's launchpad scenario tab".

In order to have 2 or more people interact over the internet, you need some kind of infrastructure. The important role here would then be that of the operator (think Matrix if you need a picture). The operator coordinates the media use of pilot and supervisor during the mission, so the latter two can concentrate on their job and not on the frakin' TeamSpeak server failing again. In addition, the operator watches the mission, too. He can assist in the later scoring, and he witnesses the whole process. This way, an angry crash-pilot can't go ballistic about unfair treatment by the supervisor (well, at least not without getting a good bash from the operator).

Last but not least, there is the administrator (no, not the one of manned space-flight or the CA from my previous blog entry). This person simply keeps a list of challenges and organizes setups, where pilot, supervisor and operator come together. This person would hold the high scores, too. Whether it is done on forum boards, via email or with a dedicated website is irrelevant to the definition.

Keep in mind that nothing stops a real person from taking more than one role. It would just make no sense to be pilot, supervisor and operator in the same setup of a challenge created by yourself on a list administered by yourself.

But what is a setup, anyway? A setup is an event where pilot and supervisor come together by means of communication channels managed by the operator. This could vary from IRC sessions with save-game transmission, via TeamSpeak chats with streaming screenshots, to multiplayer infrastructure like OMP or vMC. It is a meeting, so to say, for the duration of the complete mission or only parts of it, where the pilot is not alone in the dark, but has to hone his social skills, too. I'm sure it won't harm anyone, and if you can't stand even that small a company, why do you want to "fly together with friends" or "be part of it", anyway? If it is only for the fancy titles, go create a VSA or anything...

So now you've flown your mission and crashed that itchy-bitchy XR2 on this simple traffic-pattern round. Gear is dead. The supervisor is angry and gives you a -3. This goes onto the list the "challenge" was on (yeah, it is a challenge for many to go around the spaceport and land safely). This list basically records the outcome of all the setups. It could be a bunch of actual lists, too: one showing the setups a pilot has flown (giving pilots a kind of ranking), one showing the points gathered from a particular challenge (giving challenges a natural degree of difficulty), one displaying the points supervisors give (exposing the meanest of them). It is really up to the administrator here what is actually listed. And if you don't like the list, don't go to it. Go to the next one, because nobody says that only one list has to exist.

The scoring I have in mind here should be really simple. Let's make it a number in the range of [-5,+5]. -5 you'll get if you completely borked it, scared the tower to death, killed the crew because you found it funny to undock with the inner doors open. 0 you'll get if you at least managed to stay alive. +5 if you achieved all goals, did an elegant and smooth reentry, bought supervisor and operator a beer... (OK, scratch the last one). Of course, a simple accumulated counter of scores will not show how good you really are. You can easily end up with a smaller accumulated score flying hard missions one after the other than someone who only flies easy missions repeatedly. That's why I'm not proposing a complicated method of adding up scores from setups. Just list them and they will show off your skills just fine, almost like a pilot's real log.
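To illustrate how little bookkeeping this needs: the whole system boils down to a flat log of scored setups, from which an administrator can derive whatever lists he likes. A minimal sketch (all class and field names are hypothetical):

```python
# Minimal sketch of the setup log and the lists derived from it.
# Names (Setup, pilot_log, ...) are illustrative only, not a spec.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Setup:
    challenge: str   # which challenge was flown
    pilot: str
    supervisor: str
    score: int       # supervisor's verdict, in [-5, +5]

    def __post_init__(self):
        if not -5 <= self.score <= 5:
            raise ValueError("score must be in [-5, +5]")

# The raw log of outcomes: one entry per setup, like a pilot's real log
log = [
    Setup("XR2 traffic pattern", "crashpilot", "meansup", -3),
    Setup("Wideawake to circular orbit", "hotshot", "meansup", 4),
    Setup("XR2 traffic pattern", "hotshot", "nicesup", 5),
]

# Several views derived from the same log, as the administrator sees fit
pilot_log = defaultdict(list)         # per-pilot flight history
challenge_scores = defaultdict(list)  # natural difficulty per challenge
for s in log:
    pilot_log[s.pilot].append((s.challenge, s.score))
    challenge_scores[s.challenge].append(s.score)
```

Note that nothing here accumulates anything; the lists simply expose the raw scores, which is exactly the point.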

As you can see, this outline is far from a detailed specification for a new VSA, heaven forbid! It is a sketch of what I'd like to call a minimalistic Orbiter Challenges, with no complex automatism and more "let's come together" in it.

Comments

Sounds like a good idea. A couple of points from skim-reading your post in my hungover state.
Automated scoring is something to strive for, as otherwise someone has to watch the missions and rank them, which will eventually (in some cases) lead to this not getting done (scoring person goes on holiday/loses interest/gets hit by a bus). And whilst scoring in an MFD will require repeated updates, as you said (for testing that it's docked etc. etc.), it may be easier to shunt all of this into Lua scripting, which lends itself to easier updates (modify a test file rather than recompile a module or resort to XML for 'rules').

Scores should be continuous rather than discrete, otherwise you have a load of people who get 5/5 and are thus all ranked equal as the high-scorer. Part of the joy of competitions (and the curse) is the continual cat-and-mouse game of a small group of people trying to outdo each other to clamber for the top spot.

SPOF is a nice thing to avoid, but unless you can set up a distributed model for this you may have to rely on it. After all, O-F is a SPOF for the whole community anyway. It would be worth having a local cache of results to post though in case you can't contact the server at any point, rather than just posting an error and losing your awesome performance.

Quote:

Originally Posted by agentgonzo

Automated scoring is something to strive for, as otherwise someone has to watch the missions and rank them, which will eventually (in some cases) lead to this not getting done (scoring person goes on holiday/loses interest/gets hit by a bus). And whilst scoring in an MFD will require repeated updates, as you said (for testing that it's docked etc. etc.), it may be easier to shunt all of this into Lua scripting, which lends itself to easier updates (modify a test file rather than recompile a module or resort to XML for 'rules').

Fair point, but I don't think that you'll get automated scoring stable enough for a really flexible set of challenges. There is only so much you can test automatically. E.g., a human supervisor can judge the overall smoothness of the flight, the precision of button flicking, the "professionalism" of procedure following. He could introduce random faults ("Hey, pilot, you are now suddenly out of APU, looks like a valve is broken. Do an EVA and fix it.") and such. It allows spontaneous micro-challenges and gives it more of a simulator-training feeling.

As for the scoring person going on holiday: it was not said that only the challenge creator can fill the supervisor role. It is in fact up to the pilot whom he trusts to be a fair supervisor.
Just for clarification: I had no round-based game-play in mind, like e.g. pilot flies the mission alone, records it, sends the recording to the supervisor, supervisor plays it back and scores. I thought about a much tighter coupling of pilot and supervisor, almost real-time (not simulated real-time, mind you).

I must say I really like the human supervisor vs. auto-scoring idea. It brings people closer together. And this, in my opinion, is something that lies deep inside the VSA and multiplayer demand out there.

Quote:

Originally Posted by agentgonzo

Scores should be continuous rather than discrete, otherwise you have a load of people who get 5/5 and are thus all ranked equal as the high-scorer. Part of the joy of competitions (and the curse) is the continual cat-and-mouse game of a small group of people trying to outdo each other to clamber for the top spot.

As I wrote, I wouldn't want to define the accumulation of challenge scores yet. This is up to the list administrator. If one list is managed as a typical tournament, so be it. If another is just loose logging, that is fine, too. You can have more than one list running, which brings me to the last point you mentioned:

Quote:

Originally Posted by agentgonzo

SPOF is a nice thing to avoid, but unless you can set up a distributed model for this you may have to rely on it. After all, O-F is a SPOF for the whole community anyway. It would be worth having a local cache of results to post though in case you can't contact the server at any point, rather than just posting an error and losing your awesome performance.

The O-F argument is of course valid, but the community has already demonstrated that it quickly jumps ship to the next working place if something breaks. After all, a forum is something general; everyone knows what it should look like.

Tofitouf's idea is rather tightly bound to its infrastructure, IMHO: you need the addon-manager, you need the scoring server, you need the website for listing. I bet it is not so easy to migrate.
Not to say that this is bad per se. I just think the idea itself can be minimized to the concept of challenges alone. If everyone knows how a challenges system works, he/she can rather easily build a list somewhere else, thus creating a fall-back for the established "SPOF". If the knowledge of setting it up is rarer, this is not the case. That's basically what I meant with the ivory tower.

For setting up a distributed model, it would be possible to run a list in one of the new distributed wiki engines (using Git or Mercurial as back-end). Presto, SPOF completely gone. There would still be a "central" - or better, "blessed" - list that everyone goes to and trusts. But in the event of a server breakdown, nothing is lost: setups can still be scored, missions can still be flown. Wait till the server is up again, then push your mission scores.
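As a rough sketch of how low-tech such a Git-backed list could be (the repository layout, file name and commit message below are invented for illustration), scoring offline and pushing later is just:

```shell
# Sketch: a local copy of the (hypothetical) blessed list, scored offline.
# In practice you would 'git clone' the blessed repository instead.
mkdir challenge-list && cd challenge-list
git init -q

# One line per scored setup: date | challenge | pilot | supervisor | score
echo "2009-08-01 | XR2 traffic pattern | crashpilot | meansup | -3" >> setups.log

git add setups.log
git -c user.name=pilot -c user.email=pilot@example.org \
    commit -q -m "Score: crashpilot on XR2 traffic pattern (-3)"

# Later, once the central server is reachable again:
#   git push origin master
```

The commit history itself then doubles as the audit trail: every score is attributed, timestamped, and survives any single server going down.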