Post by BeeJay » Fri Sep 07, 2018 2:36 pm
If the file is being read on the presentation client and then processed on the AppServer, there is extra overhead in getting the data transferred from the thin client to the AppServer node. This is worse if the network connectivity is particularly bad/slow.
For example, reading a 500,000-line file line by line (the pattern is sketched just below these timings):
SingleUser - took ~5s to read every line in the file (the file was on an SSD drive).
Thin client - high-speed LAN was going to take ~15 minutes, averaging around 1.8s per 1,000 lines.
Thin client - fast(ish) wireless was going to take ~41 minutes, averaging just under 5s per 1,000 lines.
Thin client - slow wireless was going to take ~9.4 hours, averaging around 68s per 1,000 lines.
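For reference, the line-by-line pattern being timed above looks roughly like this. It is only a sketch in Python for illustration (the actual test code isn't shown here); the point is that the work is done one readLines-style request per line, and each of those requests can cross the network when the file sits on a thin client file system.

Code:

def read_line_by_line(path):
    # Models the readLines-style loop: the file is consumed one line at a time.
    count = 0
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            count += 1  # stand-in for "process the line"
    return count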
For interest's sake, I repeated the same test using logic that reads the file in chunks with 'readString' instead of doing readLines, and then parses those chunks into 'lines' with my own logic, including handling lines that were split across chunk boundaries. This showed the following timings:
SingleUser - took less than 1s per run, averaging between 0.70s and 0.75s.
Thin client - high-speed LAN took ~14s.
Thin client - fast(ish) wireless took ~15s.
Thin client - slow wireless took ~28s.
So the chunked approach is far less susceptible to the impact of network speed than readLine. If you are reading the file via a thin client file system, you may want to consider a similar chunked read approach to reduce the impact of slower connections.
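The chunked version is sketched below, again in Python purely to illustrate the shape of the logic rather than the actual code from the test (the chunk size and function name are arbitrary). It pulls a fixed-size chunk per readString-style call, splits the chunk into lines itself, and carries any partial line at the end of one chunk over to the start of the next:

Code:

def read_in_chunks(path, chunk_size=64 * 1024):
    count = 0
    leftover = ""  # partial line carried over from the previous chunk
    with open(path, "r", encoding="utf-8") as f:
        while True:
            chunk = f.read(chunk_size)  # one read per chunk, not per line
            if not chunk:
                break
            chunk = leftover + chunk
            lines = chunk.split("\n")
            leftover = lines.pop()  # last element may be an incomplete line
            for line in lines:
                count += 1  # stand-in for "process the line"
    if leftover:  # file did not end with a newline
        count += 1
    return count

The number of calls that cross the network drops from one per line to one per chunk, which is why the chunked timings above are far less sensitive to the connection speed.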
Cheers,
BeeJay.