Server side or client side processing?
I'm still working away at that Distributed File System project of mine. :-)
Reading the design docs of some of the other DFSs around, I noticed that
many of them push as much processing as they can onto the client. In Coda,
for example, when the client writes a file, it has to update all the
servers with the latest copy of the file.
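To make the contrast concrete, here's a minimal sketch of that client-side
"write-all" scheme. The names (Replica, write_all) are illustrative, not from
any real DFS; the point is just that the client pays the upload cost once per
server.

```python
class Replica:
    """A trivial in-memory stand-in for one file server."""
    def __init__(self):
        self.files = {}

    def store(self, name, data):
        self.files[name] = data


def write_all(replicas, name, data):
    """Client-side replication: the client itself pushes the new copy
    to every replica, so its uplink carries the data N times."""
    for r in replicas:
        r.store(name, data)


replicas = [Replica() for _ in range(3)]
write_all(replicas, "notes.txt", b"hello")
# Every replica now holds the latest copy, at the client's expense.
```

Fine on a LAN, but over a phone link that loop multiplies the upload cost by
the number of servers.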
I'm thinking of producing something that is geared towards really thin
clients (phones / PDAs) which are operating under bandwidth restrictions.
In this case writing to all the servers would be tedious and slow. Why not
write to one and let the servers update each other over the high speed
backbone they're connected to?
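A rough sketch of that alternative, under the same illustrative assumptions
(Server, client_write, and propagate are made-up names): the thin client
uploads once, and the receiving server fans the update out to its peers over
the fast backbone.

```python
class Server:
    """In-memory stand-in for one file server on the backbone."""
    def __init__(self):
        self.files = {}
        self.peers = []  # other servers reachable over the backbone

    def client_write(self, name, data):
        # The client pays for exactly one upload...
        self.files[name] = data
        # ...and server-to-server replication does the rest.
        self.propagate(name, data)

    def propagate(self, name, data):
        for peer in self.peers:
            peer.files[name] = data


servers = [Server() for _ in range(3)]
for s in servers:
    s.peers = [p for p in servers if p is not s]

servers[0].client_write("notes.txt", b"hello")
# All servers converge, but the phone only sent the file once.
```

A real protocol would of course need versioning and conflict handling when
two clients write to different servers at once; this only shows where the
bandwidth cost moves.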
However, the protocol should also work for fast clients: laptops,
desktops, etc. connected over a high-speed LAN.
What's the opinion of the list? Would it be better to push processing onto
the servers and let the admins worry about upgrading the hardware, or to
keep as much as possible on the client?
The point is that phones etc. are getting smarter and faster, and there are
new standards coming in which should make bandwidth much more reasonable.
I'm quite confused...