The users want to play a multiplayer game, without an external server.

One solution is to host an authoritative server on one phone, which in this case would also be a client. Considering point 1, this solution is not acceptable, since the phone's computing resources are not sufficient.

So, I want to design a peer-to-peer architecture that will distribute the game's simulation load among the clients. Because of point 2, the system needn't be complex with regard to optimization; the latency will be very low. Each client can be an authoritative source of data about itself and its immediate environment (for example, bullets).

What would be the best approach to designing such an architecture? Are there any known examples of such a LAN-level peer-to-peer protocol?

Notes:

Some of the problems are addressed here, but the concepts listed there are too high-level for me.

Security

I know that not having one authoritative server is a security issue, but it is not relevant in this case as I'm willing to trust the clients.

Edit:

I forgot to mention: it will be a rather fast-paced game (a shooter).

Also, I have already read about networking architectures at Gaffer on Games.

3 Answers

Take a look at this article about the networking architecture of Age of Empires II.

They managed to create a multiplayer game that ran great on a Pentium 90 with 16 MB RAM and a 28.8 kbit/s modem connection. They did this by having each player run their own simulation, but synchronizing their commands.
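The core of that approach can be sketched in a few lines. This is an illustrative Python sketch, not code from the article: the class and function names, the two-turn command delay, and the toy `move` command are all my assumptions. The idea is that clients exchange only commands, schedule them a couple of turns ahead to hide latency, and execute them in the same order inside identical deterministic simulations.

```python
SCHEDULE_AHEAD = 2  # turns of latency headroom (assumed value)

def apply_command(state, cmd):
    """Deterministically apply one command tuple to the world state."""
    kind, unit, x, y = cmd
    if kind == "move":
        state["units"][unit] = (x, y)

class LockstepSim:
    def __init__(self):
        self.turn = 0
        self.state = {"units": {}}  # the full world state lives locally
        self.pending = {}           # turn -> list of command tuples

    def issue(self, command):
        """Queue a local command for a future turn; in a real game you
        would also broadcast (target_turn, command) to the peers."""
        target = self.turn + SCHEDULE_AHEAD
        self.pending.setdefault(target, []).append(command)
        return target

    def receive(self, target_turn, command):
        """A command arriving from another player for a future turn."""
        self.pending.setdefault(target_turn, []).append(command)

    def advance(self):
        """Execute every command scheduled for this turn, in the same
        (sorted) order on every client, then move to the next turn."""
        for cmd in sorted(self.pending.pop(self.turn, [])):
            apply_command(self.state, cmd)
        self.turn += 1
```

Because only commands cross the wire and the simulations are deterministic, two clients that see the same commands end every turn with bit-identical world state, which is what makes the tiny bandwidth budget workable.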

I've done this for a commercial PSP racing game, which worked both over an ad hoc network, and via a wireless hotspot. (Or to a static server on the Internet, if desired)

Because of point 2, the system needn't be complex with regard to optimization

In my experience, this is not true. Wireless devices (especially small portable ones) are not like computers with wired network connections: smartphones and wireless game consoles tend to have network interfaces that are slow for game purposes.

Don't get me wrong: their throughput is usually good (that is, the amount of data per second; great for streaming movies and the like), but the latency on delivery of a particular packet can be extremely bad, and so highly variable that it's difficult to even estimate how long any individual packet will take to deliver. This variation becomes even worse as more wireless devices are packed into one general area, as their signals start to interfere with each other. As a result, quite a lot of my time was spent reducing the number of packets that needed to be sent, so we would have fewer packet collisions. (Note that this is somewhat less of a problem when a powered network hotspot is involved, rather than having the devices talk to each other directly over an ad hoc network.)

As an example of this sort of optimisation, our racing game took place in a world which had traffic lights. Thousands of them. And we needed to make sure that their signals were in synch between all the players in a network session. Rather than try to send packets around telling everyone which lights were in what state, we defined a static schedule for all of the traffic lights, and then just made sure that all the clients agreed on the current "game time". Since they all knew the game time, and all the traffic lights' states could be determined from the game time, we synchronised all that state data without actually sending any special data. This one change made a huge difference for our networking performance.
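The schedule trick can be sketched like so (the phase durations, the per-light offset parameter, and the function names are invented for illustration, not taken from the shipped game). Each light's state is a pure function of the shared game time, so no per-light packets are ever needed:

```python
GREEN, YELLOW, RED = "green", "yellow", "red"

# Static schedule shared by all clients: (state, duration in seconds).
PHASES = [(GREEN, 20.0), (YELLOW, 3.0), (RED, 17.0)]
CYCLE = sum(d for _, d in PHASES)  # 40 s full cycle

def light_state(game_time, offset):
    """State of one traffic light at a given shared game time.
    `offset` staggers individual lights so they don't all flip at once."""
    t = (game_time + offset) % CYCLE
    for state, duration in PHASES:
        if t < duration:
            return state
        t -= duration
```

As long as two clients agree on `game_time`, calling `light_state` on each of them yields the same answer for every light in the world, for free.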

That said, establishing a reliable clock synch between multiple wireless devices (with highly variable ping times, due largely to packet loss) was a huge challenge. Happy to talk more about that if you've an interest.
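For reference, one common way to estimate a peer's clock offset over a jittery link — a generic technique, not necessarily what that game shipped — is to ping repeatedly and trust the sample with the smallest round-trip time, since it is the least contaminated by wireless jitter. A minimal Python sketch (all names are mine):

```python
def offset_from_sample(t_send, peer_time, t_recv):
    """Estimated (peer_clock - local_clock) from one ping exchange,
    assuming the reply travelled for half the round-trip time."""
    rtt = t_recv - t_send
    return peer_time - (t_send + rtt / 2.0)

def best_offset(samples):
    """samples: list of (t_send, peer_time, t_recv) tuples.
    Use only the minimum-RTT sample; high-RTT samples carry the
    most queueing/retransmission noise."""
    best = min(samples, key=lambda s: s[2] - s[0])
    return offset_from_sample(*best)
```

In practice you would keep a rolling window of samples and re-estimate periodically, since wireless RTTs drift as devices move and interfere with each other.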

Each client can be an authoritative source of data about itself and
its immediate environment (for example, bullets).

This is what we did, and it worked well for us in our situation (cars). The problematic part, of course, is when an object stops being closer to player 'a' than to player 'b', and its ownership therefore transfers from one player to another.

This is actually a surprisingly complex negotiation between players, where game 'a' proposes to game 'b': "I think this object is closer to you. You should take control of it." And then game 'b' may either accept, or may decline, based upon its own view of the situation. Differences in the perceived game state between 'a' and 'b', and the change in time between when the request and response are sent and received, make this a particularly nasty little negotiation to get reliable, and it can easily degenerate into a game of "hot potato", with object ownership bouncing around continually between multiple players. And even when it does work properly, when viewed from the vantage point of game 'c', there's often a small visual discontinuity when an object switches from the ownership of one player to the ownership of another, just due to the different ping times that game 'c' has to the other two players.

My intuition is that this sort of "object ownership" approach is likely to be too cumbersome for small, short-lived objects like bullets. We used it for traffic cars and AI racers, which tended to live in the simulation for a relatively long time. It seems like a more performant approach, if you're willing to trust the clients, would be to have each player's game own their position and their projectiles, and declare when that player has been hit by someone else's projectile. (So as "game A", I'm responsible for saying where player A and player A's projectiles are, but player B is responsible for saying whether I've hit player B). With some good dead-reckoning, you should be able to get pretty reasonable behaviour out of a system like this.

You start by counting your game frames after the game starts.
Each client renders the first frame, then sends a ready message to the other clients and waits to receive ready messages from them.

Now all the clients can move on to the second frame: render the frame, send and receive commands, update the world, update the physics, and so on; after that, they send ready messages to each other again.

This solution works very well for LAN games, and all of your clients will stay in sync.
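The ready-message barrier described above can be sketched as follows (the networking is stubbed out; a real implementation would deliver the ready messages over UDP or similar, and the names here are illustrative):

```python
class LockstepClient:
    def __init__(self):
        self.frame = 0
        self.ready = set()  # (player, frame) ready messages received so far

    def on_ready(self, player, frame):
        """Called when a ready message arrives (including our own)."""
        self.ready.add((player, frame))

    def try_advance(self, players):
        """Advance to the next frame only once *every* player has
        signalled ready for the current one; otherwise keep waiting."""
        if all((p, self.frame) in self.ready for p in players):
            self.frame += 1
            return True
        return False
```

The barrier guarantees no client ever simulates frame N+1 before everyone has finished frame N, which is what keeps the simulations in lockstep; the cost is that the whole session runs at the pace of the slowest client.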

With this type of networking you can be sure all of your clients are in sync, so you can test which approach best suits your needs.

The first way is to send only the inputs to the others, and have each client simulate running, firing, collision detection, and so on. The second way is for each client to send information about its own character (position, rotation, state, animation frame, etc.) to the others, so that each client only calculates its own objects and sends them over the network. The first way is more secure.

This answer seems to be about the game loop. I think the OP is asking about the load distribution system; specifically, who simulates what and how the clients communicate. Could you go into some more detail? I am interested in this problem as well.
– Wackidev Jun 23 '12 at 10:52

@Wackidev With this type of networking you can be sure all of your clients are in sync, so you can test which approach best suits your needs. The first way is to send only the inputs to the others, and have each client simulate running, firing, collision detection, and so on. The second way is for each client to send information about its own character (position, rotation, state, animation frame, etc.) to the others, so that each client only calculates its own objects and sends them over the network. The first way is more secure.
– kochol Jun 24 '12 at 6:53

So please put that in your answer. Right now I don't think it answers the question. Also, please delete the first duplicate comment. Thanks. As to the idea itself, doesn't the first option you listed require the simulation to be deterministic?
– Wackidev Jun 24 '12 at 16:58