OK, I got it. It was obvious; I just didn't see it at first glance.
Here is the content of the only page the crawler grabs:

`<META NAME="robots" CONTENT="noindex,nofollow">`
You can verify it on the command line with HTTPie (`apt-get install httpie`):
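For example, a minimal check might look like this (the URL is a placeholder for your site; here the fetch is simulated with `printf` so the snippet runs offline, writing the exact tag quoted above into `page.html`):

```shell
# With HTTPie installed, you would fetch the page like so:
#   http --body https://example.com/ > page.html
# Simulated fetch: write the robots meta tag found on your page
printf '%s\n' '<META NAME="robots" CONTENT="noindex,nofollow">' > page.html
# Extract the robots meta tag; "noindex,nofollow" tells crawlers to skip the page
grep -io '<meta name="robots"[^>]*>' page.html
```

If the `grep` prints a tag containing `noindex,nofollow`, crawlers (including ours) will ignore that page.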
Moreover, it appears your site is behind the Incapsula CDN, which blocks crawlers from doing their job (a choice made by you, your company, or Incapsula).
As a workaround, you could ask Incapsula (if possible) to allow the user-agent “asqatasun” (the name used by our crawler). You could also audit the site from inside your company, i.e. without going through the CDN; you should have a way to reach it internally, on a pre-prod environment or something like that.
To be sure, you could check whether the CDN was set up between your last 1000-page audit and now.
As a side note, you launched the audit against the AccessiWeb 2.2 (aw22) ruleset, which has been deprecated for 5 years (it is kept for historical purposes and will be removed in the next major version). You should use RGAA instead.
Hope this helps!