Websites with Error Pages in Google Results

    As part of my search engine marketing practice, when we talk to a new prospect, we run a digital analysis of their website's quality and search engine responsiveness before we provide a proposal. More and more these days I'm seeing websites with numerous errors and problems, and a great many non-existent or bad web pages appearing in Google results. This is a huge problem for a website (of course, we can analyze and fix these issues)! This blog post covers some of the damage such technical errors can cause, and some of the reasons they occur.
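
    One quick way to gauge what Google has indexed for a domain is the site: search operator. A minimal sketch (example.com is a placeholder; substitute the domain being audited):

```
site:example.com
```

    If that query returns far more results than the site has real pages, or surfaces URLs you don't recognize, that's a red flag worth investigating.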

    So if a website has too many URLs indexed (i.e., more web pages appearing in Google results than are shown in its navigation), why the concern? Isn't it good to have a lot of search results for SEO? What's the worst that can happen? Here are some answers:

    • Websites can be “blown up” or taken down or disabled (via hacking or virus).
    • Repeated “Page not found” errors will appear to searchers when clicking on links to your site from Google.
    • Your domain can be blocked by search engines (meaning they won’t show it in listings, or they’ll drop it in rankings).
    • Your domain can be blocked by email service providers for reasons of spam suspicion.
    • Hackers can get in and do whatever they want with your site.
    • Files you don’t want publicly viewable can be seen (e.g., DB files or info supposed to be hidden behind login).
    • Customers can be confused about which is real content on your site vs. which is fake or old.
    • Outside eCommerce sites or shady offers can use your site to backlink, promote or sell their content.
    • Your domain may be so heavily blacklisted by Google that you may need to change domains.
    • Web users will come to associate your brand with a faulty website, technical errors and overall problems.
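
    The "Page not found" problem above is easy to audit in bulk. Here's a minimal, hypothetical Python sketch (URL list and User-Agent string are placeholders) that probes a list of URLs and flags any that return error status codes, the kind of broken results searchers hit when clicking through from Google:

```python
# Hypothetical audit sketch: probe URLs and flag those returning
# "page not found" or server errors.
import urllib.error
import urllib.request

def check_url(url, timeout=10):
    """Return the HTTP status code for url (e.g., 200, 404, 500)."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "site-audit-sketch"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # HTTPError carries the status code for 4xx/5xx responses
        return err.code

def is_broken(status):
    """Treat any 4xx/5xx response as a broken page."""
    return status >= 400

# Usage (uncomment to run against a real site):
# for url in ["https://example.com/", "https://example.com/old-page"]:
#     print(url, check_url(url))
```

    Feeding this the URLs Google shows for a site: query quickly reveals how many indexed pages are actually dead.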

    And here are some of the more specific technical problems and reasons behind too many indexed pages:

    • Faceted navigation (e.g., eCommerce sites that give the same pages via multiple categories or options)
    • Hacking
    • Spambot fake URL creation (e.g., for spam email promotion)
    • Bot penetration of outdated technology, i.e., un-updated databases, themes or plugins (such as sliders)
    • Old web pages now removed but still indexed by Google
    • Old websites still sitting on the server behind the new live site
    • Content thought to be behind a login (e.g., white papers) but actually indexed by Google
    • Blogs/news sections of sites
    • Poorly constructed subdomains
    • Cloud storage hacking
    • Sites which don’t properly redirect www to non-www (or vice versa)
    • Sites with ghosted domains or content mirroring (same site visible across multiple domains without redirects)
    • Sites with “archived” content spread across multiple URLs (which will be the case unless the site/SEO architecture was customized with noindex directives on the relevant tag or permalink pages)
    • Poorly implemented eCommerce software
    • Too many menu or eCommerce tool integrations
    • Under-the-hood items, such as database files and system files, which were not closed off from search engines in the robots.txt file
    • In-site search results indexed.
    • Dynamically-generated page URLs (such as eCommerce products with multiple features).
    • Viruses creating fake URLs
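
    Several of the causes above (system files, in-site search results, admin areas) can be fenced off with crawl directives. A minimal robots.txt sketch, with hypothetical paths that would need adjusting to the actual platform:

```
# robots.txt sketch -- paths are placeholders; adjust to your platform
User-agent: *
Disallow: /wp-admin/     # admin/system area
Disallow: /search/       # in-site search results
Disallow: /cgi-bin/      # server scripts
```

    Note that robots.txt only blocks crawling; for pages Google has already indexed (archives, old pages, tag listings), a noindex directive on the page itself is the more reliable tool:

```html
<meta name="robots" content="noindex, follow">
```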
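
    And for the redirect and content-mirroring problems, a permanent (301) redirect consolidates every duplicate host onto one canonical domain. A hypothetical .htaccess sketch for Apache with mod_rewrite enabled (www.example.com is a placeholder for the canonical domain):

```apache
# Redirect every request on a non-canonical host to the canonical
# domain with a permanent (301) redirect.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

    With this in place, mirrored domains and non-www variants stop accumulating their own entries in Google's index.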

    Thanks for reading,
    Jake Aull, Zen Fires Search Engine Marketing