Please enable JavaScript to view the page content

I'm trying to access this page: https://ccmspa.pinellascounty.org/PublicAccess/Search.aspx?ID=400&NodeID=300,200,23001,11000,11100,11200,11300,11400,1000,12000,2000,13000,3000,11500,22601,11101,11102,11103,11104,11105,11106,11107,11108,11109,11201,11202,11203,11204,11205,11206,11207,11208,11209,11210,11112,11113,11114,11301,11302,11303,11304,11305,11410,11450,11411,11412,11451,11452,11453,12100,12200,12300,11110,11111,12101,12102,12103,12104,12105,12106,12107,12108,12109,12110,12111,12112,12201,12202,12203,12204,12205,12206,12207,12208,12310,12320,12311,12312,12321,12322,12400,11501,11600,11601,11602,11603,11604,13100,13200,14100,14200,14300,14400,14500,23002,23003,23004,5000,7000,21200,21300,21400,21500,22100,22200,22300,22400,22500,22700,21301,21302,21303,21304,21305,21306,21307,21308,21309,21310,21311,21312,22301,22302,22303,22304,22305,22306,22307,22308,22309,22310,22311,22312,21401,21402,21403,21304,21405,21406,21407,21408,22401,22402,22403,22404,22405,22406,22407,22408,21501,21502,22501,22502,21601,21602,21603,21604,22601,22602,22603,22604,23002,23003,23004,12113&NodeDesc=Pinellas%20County

and all I see on the page is "Please enable JavaScript to view the page content".

I'm using Windows 10 and tried both Screen-Scraper Pro 6 and 7.

Thanks,
Jack

Please refer to this post: http://blog.screen-scraper.com/2015/10/28/dynamic-content/

Dynamic Content

Do you have a sample scraping session showing how you put this all together?

JavaScript

I think I have the same issue. I read the blog link but didn't understand it. I have this code in the details page. I am not spamming, as the users will have requested demos from us; I am just trying to do email verification. Can this be achieved?

<p>Email: <span id="cloak95458">This email address is being protected from spambots. You need JavaScript enabled to view it.</span>
<script type='text/javascript'>
 //<!--
 document.getElementById('cloak95458').innerHTML = '';
 var prefix = '&#109;a' + 'i&#108;' + '&#116;o';
 var path = 'hr' + 'ef' + '=';
 var addy95458 = 'br&#117;n&#111;.h&#105;cks&#111;n' + '&#64;';
 addy95458 = addy95458 + 'l&#105;ncs&#101;l&#101;ctr&#105;c&#97;l' + '&#46;' + 'c&#111;' + '&#46;' + '&#117;k';
 var addy_text95458 = 'br&#117;n&#111;.h&#105;cks&#111;n' + '&#64;' + 'l&#105;ncs&#101;l&#101;ctr&#105;c&#97;l' + '&#46;' + 'c&#111;' + '&#46;' + '&#117;k';
 document.getElementById('cloak95458').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy95458 + '\'>'+addy_text95458+'<\/a>';
 //-->
 
</script>

This is not dynamic content. They are just using calculations in JavaScript to obfuscate the email address.

The tokens that start with "&" and end with ";" are HTML character entities; for example, "&#109;" is the letter "m".

You could decode the characters by running a script like this on the extractor pattern before the pattern is applied:

import org.apache.commons.lang.StringEscapeUtils;

// Decode HTML character entities (e.g. "&#109;" becomes "m").
// Note: unescapeHtml, not unescapeJavaScript -- these are HTML numeric entities, not JavaScript escapes.
String decoded = StringEscapeUtils.unescapeHtml(scrapeableFile.getContentAsString());
scrapeableFile.setLastScrapedData(decoded);

Then you will need to replicate all their concatenations to get the real address.
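For illustration, here is a self-contained sketch (plain Java with only the standard library, not a screen-scraper script; the variable name mirrors the obfuscation code above) that decodes the numeric entities and replicates the script's concatenations to recover the address:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DecloakEmail {
    // Replace numeric HTML entities like "&#109;" with the character they encode
    static String unescapeNumericEntities(String s) {
        Matcher m = Pattern.compile("&#(\\d+);").matcher(s);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1));
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Replicate the concatenations from the page's obfuscation script
        String addy = "br&#117;n&#111;.h&#105;cks&#111;n" + "&#64;";
        addy = addy + "l&#105;ncs&#101;l&#101;ctr&#105;c&#97;l" + "&#46;" + "c&#111;" + "&#46;" + "&#117;k";
        System.out.println(unescapeNumericEntities(addy)); // prints bruno.hickson@lincselectrical.co.uk
    }
}
```

In a screen-scraper session you would do the same decoding in a script step, but the concatenation logic has to be copied from whatever the target page's script actually does.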

Dynamic Content

Jason,

I was asking for an example of a scrapeable session for your Dynamic Content post: http://blog.screen-scraper.com/2015/10/28/dynamic-content/

Where do you call the JS and the JSON?

Jack

Dynamic Content

Jason, any update on how to get your infinite content demo to work?

All you need to do is make a session, and add a scrapeableFile for each of the 3 files I listed, in order. Because it's a simple sample, there's nothing to extract; on a real site there would be. But if you scrape and look at the landing page, you can see it's just a bit of JavaScript that doesn't reflect what the browser shows you.

Thanks but I'm not sure if this is the same as the problem I'm facing with this site: https://ccmspa.pinellascounty.org/PublicAccess/default.aspx

There are no external JS files to be called; there is only JavaScript included in the header of the page. When I go through the proxy, the only things being requested are the page itself and the CSS file.

Help!!!

Can such sites be done by this software?

I do them every day. They're just hard to develop because the JavaScript is hard to parse, but once the scrape is made it will work fine.
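Parsing that inline JavaScript usually comes down to running an extractor pattern or regular expression over the raw page source. A minimal sketch of the idea (plain Java; the variable name caseData and the sample value are made up for demonstration, not taken from the Pinellas County site):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class InlineJsExtract {
    // Pull the string literal assigned to a given variable inside an inline <script> block
    static String extractJsVar(String html, String varName) {
        Pattern p = Pattern.compile("var\\s+" + Pattern.quote(varName) + "\\s*=\\s*'([^']*)'");
        Matcher m = p.matcher(html);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // "caseData" is a hypothetical variable name for demonstration
        String html = "<html><script>var caseData = '2024-CF-001234';</script></html>";
        System.out.println(extractJsVar(html, "caseData")); // prints 2024-CF-001234
    }
}
```

On a real site you would first look at the page source through the proxy to see which variables or JSON blobs the inline script builds, then write a pattern against those.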