Designating a "host" client is in no way better than the traditional server/clients model, due to the following:
a. you would have to either direct-connect each peer to the host, which is inefficient for large groups and not always possible (see NAT issues), or route messages to the host, which may turn out to be slow
b. average latency of clients-to-server vs. one-client-to-all-the-others should favor the former, given that a server is generally connected to a network backbone
c. individual P2P connections may fail and/or the group may "rewire" itself at any time, so detecting when the host leaves is tricky; not to mention efficiently designating a new host: rocket science may be easier
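One way to sidestep the "designating the host" problem is a deterministic election rule that every peer runs independently, e.g. the lowest peer ID still visible in the group wins. A minimal sketch in Python (peer IDs and the rule itself are my assumptions, not anything RTMFP provides):

```python
# Deterministic "host" election sketch: every peer applies the same rule
# (lowest peer ID among the peers it currently sees), so no extra
# negotiation traffic is needed. Peer IDs here are hypothetical strings.

def elect_host(alive_peers):
    """Return the ID that all peers should independently agree is the host."""
    return min(alive_peers)

peers = {"peer-7", "peer-2", "peer-9"}
print(elect_host(peers))   # peer-2 is the host

peers.discard("peer-2")    # host drops out of the mesh
print(elect_host(peers))   # everyone independently re-elects peer-7
```

This only agrees as far as the peers' views of the group agree, which is exactly the tricky part during a "rewire".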
The way I see it, you have the following options:
1. Build it entirely distributed:
-each peer posts individual updates to the group
-everybody listens to all updates and builds its own copy of the state; processing overhead should be no bigger for any peer than if it had to be the host
-make the model loss-resilient; some individual updates may be lost to some peers, so further messages have to be self-sufficient (i.e. declare absolute grid positions, not step movements); timestamp every message, as they may even arrive in reverse order
-obfuscate the data model to give cheaters a hard time; keep a backup (yet more obfuscated) data model and swap it in if the current one is compromised; have each peer be suspicious about every update, as if it were the server, and maybe report suspected illegal moves
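The "self-sufficient, timestamped updates" idea above can be sketched like this (a toy in Python; peer names and the update shape are my assumptions): each update carries an absolute position plus a timestamp, and a peer applies it only if it is newer than what it already holds, so lost or reordered messages are harmless.

```python
# Loss-resilient state merge: updates carry absolute positions and a
# timestamp; stale or reordered updates are simply ignored.
# Peer names and the update format are illustrative assumptions.

state = {}  # peer_id -> (timestamp, (x, y))

def apply_update(peer_id, timestamp, pos):
    """Apply an update only if it is newer than what we already hold."""
    current = state.get(peer_id)
    if current is None or timestamp > current[0]:
        state[peer_id] = (timestamp, pos)

apply_update("alice", 10, (3, 4))
apply_update("alice", 8, (1, 1))   # late arrival: ignored
apply_update("alice", 12, (5, 6))  # newer: applied
print(state["alice"])              # (12, (5, 6))
```

Note this relies on absolute positions: had the updates been deltas ("move one step left"), dropping the stale one would corrupt the state.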
2. Use RTMFP unicast with FMIS 4 (the $4.5k one)
-same client/server model as RTMP
-managed, no way to cheat (or is there?)
-lowest latency due to RTMFP yet slightly increased overhead due to encryption
-RTMP fallback may still be needed for some with firewalled UDP
3. Use a hybrid of managed and distributed architecture
-connect peers to the server and also with each other into the mesh
-manage security at server level
-peers send frequent updates to both server and mesh
-peers receive frequent updates from the mesh yet infrequent updates from the server; the server copy of the state is both a backup against data loss in the mesh and a double-check of data integrity
-whichever individual piece of information is received earlier, via either mesh or server, is assimilated into the local state and rendered
-have peers that have low latency in the mesh request server to update them more often
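The "whichever copy arrives first wins" rule of the hybrid approach can be sketched as follows (again an assumption-laden toy, not FMS code): each update carries a per-entity sequence number, and a peer assimilates it from whichever channel (mesh or server) delivers it first, dropping the later duplicate.

```python
# Hybrid merge: the same update may arrive via the mesh and, later, via
# the server (or vice versa). Each update carries a sequence number per
# entity; the first arrival is assimilated, the duplicate is dropped.
# Channel names and the update format are illustrative assumptions.

state = {}       # entity_id -> (seq, payload)
applied = set()  # (entity_id, seq) pairs already assimilated

def on_update(channel, entity_id, seq, payload):
    """Return True if this update was new and got applied."""
    key = (entity_id, seq)
    if key in applied:
        return False  # already got this one via the other channel
    applied.add(key)
    current = state.get(entity_id)
    if current is None or seq > current[0]:
        state[entity_id] = (seq, payload)
    return True

print(on_update("mesh", "ball", 1, {"x": 2}))    # True: applied
print(on_update("server", "ball", 1, {"x": 2}))  # False: duplicate dropped
```

Counting how often the server copy wins the race (the `channel` argument) would also tell a peer whether its mesh latency is bad enough to request more frequent server updates.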
The choice from the above depends a lot on the specifics of the application: maximum group size (for P2P, bigger is better, huge is awesome), required latency, reliability and security.
Ah Great. Thanks for clearing that up for me.
I do like the sound of using a hybrid approach, seems like it could be the speediest.
Thanks for sharing. If anyone else has more ideas it would be great to dump them here.