ssl_error_rx_record_too_long

I am currently trying to set up a scrape of a site to automate the download of order responses that they host as PDF files, but I've run into two problems.

1. I tried to set up a proxy session via screen-scraper like I always have, but after starting the proxy and changing the proxy settings in my browser, navigating to this particular site gives me the error message "ssl_error_rx_record_too_long" (http://imgur.com/qIMR4uQ), and I have absolutely no clue how to fix it. It happens with every browser I've tried, and it seems to happen only on this machine.

For training purposes I set up a basic version on another PC on our network, and there it had absolutely no problems. But because the machine running into this error is the one with our enterprise version, which we always use for our scrapes, we absolutely need to fix this issue.

I would be grateful for any clues as to what could be causing this issue.

2. The second problem is more of a minor nuisance. The site doesn't expose direct download links; clicking the download button instead invokes a PHP script that serves the file directly, returning an HTTP response whose body is the raw contents of the file for you to save.

Normally I use the downloadFile() method for anything download-related, but you can't just pass the link to the PHP script to that method, can you?

Is there any easier way around it than to manually capture and save the HTTP response via a FileWriter/OutputStreamWriter?
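For reference, the manual fallback I'd like to avoid would look roughly like this in plain Java: request the script's URL yourself and stream the raw response body to disk (the URL and file name below are placeholders). A byte stream is needed rather than an OutputStreamWriter, since a character writer would corrupt binary data like a PDF.

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ManualDownload {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL of the PHP script that serves the PDF
            URL url = new URL("https://example.com/download.php?order=12345");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();
            OutputStream out = new FileOutputStream("order_response.pdf");
            try {
                // Copy the raw response bytes straight to disk
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            } finally {
                in.close();
                out.close();
                conn.disconnect();
            }
        }
    }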

  1. The proxy is something you inject between the site and your browser, so some sites just don't proxy correctly. There are some OS settings you might try, but I don't know what OS you're on. Aside from that, sometimes you have to fall back on the browser's developer tools to help you build the request.
  2. I think you're looking for scrapeableFile.saveFileOnRequest(); see the sketch after this list.
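If that's the method you need, usage would look something like this in a script run "Before file is scraped" on the scrapeable file that requests the PHP script (the target path is a placeholder):

    // Run "Before file is scraped" on the scrapeable file that requests
    // the PHP download script; the raw response body is written to this
    // path instead of being parsed. The path itself is a placeholder.
    scrapeableFile.saveFileOnRequest("/tmp/order_response.pdf");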

The machine we are talking about is currently running:

CentOS 6.6 (Linux kernel 2.6.32-431.17.1.el6.x86_64)
GNOME 2.28.2

So what OS settings do you suggest that would either help fix the issue or at least narrow down its cause?

I can get that error when the port isn't set correctly, but other than that I can't reproduce it. Which version of screen-scraper do you have?

The port screen-scraper's virtual proxy runs through? That is the standard one, 8777. But if that port weren't set correctly, wouldn't it be impossible to capture any communication at all, rather than just running into this problem on HTTPS/SSL-encrypted sites?

Furthermore, if you actually create a scrapeable file for that site's URL and start the scraping session, you get a handshake error.

As for the version: we are using 6.0 (stable), Enterprise Edition.
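In case it helps narrow things down: since screen-scraper makes its requests through the JVM, my guess is that a handshake failure could mean the installed Java runtime doesn't offer a TLS version the site requires. A standalone check of what this machine's JVM supports would look something like this (my own diagnostic sketch, nothing from screen-scraper itself):

    import java.util.Arrays;
    import javax.net.ssl.SSLContext;

    public class TlsCheck {
        public static void main(String[] args) throws Exception {
            // Report the JVM version and the TLS/SSL protocol versions its
            // default SSLContext enables by default and supports at all
            System.out.println("Java version: " + System.getProperty("java.version"));
            SSLContext ctx = SSLContext.getDefault();
            System.out.println("Enabled protocols:   "
                    + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
            System.out.println("Supported protocols: "
                    + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
        }
    }

If the supported list tops out at TLSv1, that could explain why newer HTTPS sites fail while plain HTTP scrapes work fine.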

Edit:
One thing that could cause an issue: as far as I know, the machine running screen-scraper isn't connected to the internet directly; there should be at least a proxy or ISA server between it and the internet connection. It shouldn't block any communication, though, and in fact I've had no problems with it as far as non-HTTPS scrapes are concerned. But if you suspect that to be the issue, I need to know what to tell our IT administration: what exactly they need to do, and which settings they have to check or change. I only have the privileges necessary to administer this particular machine.

Since you're getting an error on the scrape as well, it is most likely screen-scraper. We've been adding a lot of HTTPS functionality, so the first thing to do is get the newest version. 6.0.61a came out today; it's a major upgrade, so it may have some issues (if you see any, please report them), but I also think it's the only thing to do.

If you can open the workbench:

  1. Go to Options > Settings and check the box to "allow upgrade to unstable versions"
  2. Save and close
  3. Click Options > Check for updates

Otherwise:

  1. Find the file "ss_update.py"
  2. Validate the file has execute permissions, and run it

Just for completeness' sake, and for anyone who runs into the same problem: updating screen-scraper and the Java version it uses did the trick.

@jason: thanks for the help