I'm not sure whether this would solve your problem, but I do have an updated version that dramatically improves the efficiency of transfers that don't have a 'Content-Length' header - I just haven't had time yet to update the downloadable files on my blog. If you send me an e-mail at 'kris at rorohiko.com', I'll send you a copy of the latest version to try - maybe it'll fix the problem for you.
OK, I've repackaged the updated script into the download linked from the blog. The download link can be found at:
I'd be interested to hear whether it fixed the problem...
Kris: With respect to:
Adjusted the script to give much faster downloads in case the Content-Length header is not present in the web server headers. Also changed the protocol to HTTP/1.0 instead of HTTP/1.1 to sidestep the issue of 'chunked' downloads - support for 'chunked' HTTP is left as an exercise.
It appears that, if recent questions are any guide, use of HTTP/1.0 is probably insufficient. There were two or three questions on this forum this month that turned out to be cases where web servers gave confusing or wrong answers with HTTP/1.0 but worked properly with HTTP/1.1. In one case it was especially strange because some URLs on the server worked with 1.0 and others required 1.1.
I suppose it might be sufficient to use HTTP/1.0 with a Host: header, I dunno. But I'd worry about HTTP/1.0...
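For reference, here's a minimal sketch of what an HTTP/1.0 request with a Host: header could look like, reusing the same parsedURL shape as GetURL's request line. The buildRequest function and its parameter are hypothetical, just for illustration - they are not part of GetURL:

```javascript
// Build an HTTP/1.0 request that still sends a Host: header, so
// name-based virtual hosts can route it to the right site.
// (Host: is optional in 1.0 but required in 1.1; many 1.0-only
// problems come down to the server never seeing a Host: header.)
function buildRequest(parsedURL) {
    return "GET /" + parsedURL.path + " HTTP/1.0\r\n" +
           "Host: " + parsedURL.host + "\r\n" +
           "Connection: close\r\n" +   // ask the server not to keep-alive
           "\r\n";                     // blank line ends the headers
}
```

Whether that's enough to satisfy the servers that misbehaved with 1.0 is exactly the open question, of course.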
Hmm... Interesting. A few months back, I dragged GetURL out of my software-closet and tried to use it with one particular server, and found out it became the victim of 'chunked HTTP', where a large response is split into 'chunks' with some additional protocol layer interspersed.
That threw the script a curve ball, and it did not handle it well. As GetURL is pretty much all 'ad hoc' code rather than a full-fledged HTTP library (i.e. I tweak it as needed in each individual project, and the list of what it does not do is much longer than the list of what it does do), I chose the easy way out: instead of implementing 'chunked HTTP' (which is not all that hard), I figured that since chunked encoding is not part of the HTTP/1.0 spec, I'd simply force the use of HTTP/1.0 - so I simply changed
"GET /" + parsedURL.path + " HTTP/1.1\n" +
"GET /" + parsedURL.path + " HTTP/1.0\n" +
Yup, I know - that's lazy, eh! But that fixed it for me - and given the time constraints that reality always seems to be throwing at me, I did not do any kind of further 'deep' research into the area.
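For anyone who does want to take the non-lazy route: decoding a chunked body really is straightforward. Here's a minimal sketch - decodeChunked is hypothetical and not part of GetURL, and it assumes the headers have already been stripped and the body is available as a string (binary-safe handling would need a byte buffer instead):

```javascript
// Decode an HTTP chunked-transfer-encoded body.
// Each chunk is: <size in hex>CRLF <size bytes of data> CRLF,
// terminated by a zero-size chunk.
function decodeChunked(body) {
    var result = "";
    var pos = 0;
    while (pos < body.length) {
        // The size line ends at the first CRLF.
        var lineEnd = body.indexOf("\r\n", pos);
        if (lineEnd < 0) break;
        // Ignore optional chunk extensions after ';'.
        var sizeToken = body.substring(pos, lineEnd).split(";")[0];
        var size = parseInt(sizeToken, 16);
        if (isNaN(size) || size === 0) break;  // zero-size chunk = done
        var dataStart = lineEnd + 2;
        result += body.substr(dataStart, size);
        pos = dataStart + size + 2;            // skip the CRLF after the data
    }
    return result;
}
```

A real implementation would also have to read the optional trailer headers after the last chunk, but for a simple downloader they can usually be ignored.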
If anyone is bumping into issues with GetURL: if you want me to have a look, you can send me a packet dump - use Wireshark or something similar to 'capture' the packets. That often gives me a good clue as to what is going wrong. However, no guarantees that I'll have a solution - and it might take a wee while before I have time.
Thank you so much for the update, and the information!
I'm sorry to say that I was not able to detect any change in speed after updating the script. Probably there was already a Content-Length header - and the change to HTTP/1.0 made no difference either.
But today we finally got the script to run fast again!
There is obviously something strange with the DNS at the customer's site - difficulties resolving server names - which I was not aware of (and neither was the customer, until now).
The machines on the older OS X 10.5.8 have no trouble finding the web service using http://servername/...
But the ones updated to OS X 10.6.8 or higher obviously have problems with this.
Changing the address to http://123.456.7.8/... (example IP) made the scripts run as fast as ever on those machines as well.
Thank you for all helpful comments!