Re: Somethings about libraries..
> In the case of applications too, it is possible to share the same code
> between two or more instances. If required, each instance maintains its
> own data segment (e.g., vi, netscape, etc.)
You know, I wonder about this...
It may be possible, yes, but is it done in practice?
Take this example:
I compile a prog, a.out, and run it, loading it into memory. I then switch
VTs, make a few changes to the source file, and recompile. I then run the
new output file (a.out) from the new VT...
My Q is this: how can the kernel know that the new program is different from
the one it loaded just a short while ago? If it tries to share code between
these processes, the result would be, well; interesting.
True, the kernel could keep track of the timestamp and use checksums...
but I doubt Linux does this...
The shell loads a new prog by forking off a child and then exec()ing the
executable. exec and its cousins overlay the process memory with the new
image. There is no concrete relation between vi being run by two people on
the same machine, unless they're related by a non-exec()ing fork.
Remember, the only reason the kernel can optimise a fork by sharing code is
because it's *guaranteed* that the child is identical to the parent. Can't
say that about two unrelated processes.
The reason subsequent Netscape windows load faster than the first is that
much of the shared code (in the libs) has been loaded already. In addition,
some of the data may still be in the disk cache...
Just my 2 paisa.