In addition, if crawlers struggle to read your website, it will be harder for you to achieve rankings on search engines in the future, and you will lose the benefits and advantages of SEO. So what does a crawler-friendly website actually look like?
What Does a Crawler-Friendly Website Look Like?
[Screenshots: an example of a crawler-friendly page and an example of a page that is not crawler-friendly]
You can see the difference between a page that is crawler-friendly and one that isn't. On a page that is not crawler-friendly, robots have difficulty reading and understanding the content: the server returns only JavaScript, and the HTML with the actual content is built only after that JavaScript runs in the browser, so the crawler receives an almost empty page (see the sketch below). Next, we will discuss which tools you can use to check your website and how to handle a website that is not crawler-friendly.
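As a rough illustration, the following sketch (TypeScript for Node.js 18 or newer, with a placeholder URL and phrase) fetches a page the way a simple crawler would, without executing any JavaScript, and checks whether the visible content is actually in the HTML the server returns:

```typescript
// Placeholder URL and phrase: replace them with your own page and some text
// that is visible on that page in a browser.
const url = "https://example.com/";
const visibleText = "Welcome to Example";

async function checkRawHtml(): Promise<void> {
  // Fetch the page like a simple crawler: no JavaScript is executed,
  // so only the HTML that the server returns is available.
  const html = await (await fetch(url)).text();

  if (html.includes(visibleText)) {
    console.log("The text is in the initial HTML; the page looks crawler-friendly.");
  } else {
    console.log("The text is missing; it is probably only added by JavaScript in the browser.");
  }
}

checkRawHtml().catch(console.error);
```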
How Do You Check Whether a Website Is Crawler-Friendly?
There are three methods you can use to check whether a website is crawler-friendly:
1. Using Google’s Mobile-Friendly Test tool
The first method you can use is Google’s Mobile-Friendly Test tool.
- First, search for the tool on the Google search engine.
- When you have found it, enter your website URL and run the test.
- Click “View tested page”, which appears beside the screenshot results.
- If the screenshot still does not show your content, check again with a copy of the HTML code.
- Search for an HTML viewer and select the Code Beautify HTML viewer.
- Paste the HTML code into the left box, then click Run/View.
- If the right box shows the content of your website, your page is already crawler-friendly; the sketch below automates a similar comparison.
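That comparison can also be scripted. The following is only a sketch, assuming Node.js 18 or newer with the puppeteer package installed (npm install puppeteer) and using a placeholder URL and keyword: it fetches the raw HTML the server returns and compares it with what headless Chrome sees after JavaScript has run.

```typescript
import puppeteer from "puppeteer";

// Placeholder values: replace with your own page and a phrase that should be visible on it.
const url = "https://example.com/";
const keyword = "Welcome to Example";

async function main(): Promise<void> {
  // 1. Raw HTML, as a simple crawler receives it (no JavaScript executed).
  const rawHtml = await (await fetch(url)).text();

  // 2. Rendered HTML, after headless Chrome has executed the page's JavaScript.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const renderedHtml = await page.content();
  await browser.close();

  console.log("keyword in raw HTML:     ", rawHtml.includes(keyword));
  console.log("keyword in rendered HTML:", renderedHtml.includes(keyword));
  // If the keyword only appears in the rendered HTML, the content relies on
  // client-side rendering and the page is likely not crawler-friendly.
}

main().catch(console.error);
```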
2. Using Screaming Frog
The second method is to check your website with the Screaming Frog desktop application. If you don't have it yet, download and install it first.
- Open the Screaming Frog application and enter your website URL.
- Click Start, and Screaming Frog will crawl your website.
- If the address list contains no HTML pages, only JavaScript files, your website is not crawler-friendly.
3. Using Chrome DevTools – Network Conditions
In the last method, you can check your website via the browser's inspect element tools.
- Open your website, then right-click and select Inspect (inspect element).
- Select the Network tab.
- Click Network conditions.
- Under User agent, uncheck “Use browser default” and select Googlebot.
- Reload the page, then click your website URL in the request list. If the content shown in the Preview tab reflects your website content, the website is crawler-friendly; the sketch below performs a similar request from a script.
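This is only a sketch, for Node.js 18 or newer, with a placeholder URL; the user-agent string is the one Google documents for its desktop crawler. It requests the page as Googlebot and prints the start of the response so you can see whether real content comes back:

```typescript
// Placeholder URL: replace with a page on your own website.
const url = "https://example.com/";

async function fetchAsGooglebot(): Promise<void> {
  const response = await fetch(url, {
    headers: {
      // Googlebot's documented desktop user-agent string.
      "User-Agent":
        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    },
  });
  const html = await response.text();

  // Print the start of the response so you can see whether real content comes
  // back, or only an empty shell plus script tags.
  console.log(html.slice(0, 1000));
}

fetchAsGooglebot().catch(console.error);
```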
What If My Website Is Not Crawler-Friendly?
What should you do if your website is not crawler-friendly? The approach is divided into two cases: websites that are still in development and websites that already exist.
In-development websites
- Get buy-in from the development team as soon as possible. Make sure they understand this issue.
- Give them a few references:
- Google for: “SEO for developers.”
- Google for: “javascript SEO google.”
Improve existing websites
Rework the website to use server-side rendering
Server-side rendering allows developers to populate web pages with custom user data directly on the server. It is usually faster to make all requests on the server than to make additional browser-to-server round trips. This is also what developers typically did before client-side rendering became widespread. For more information, you can read the article Understanding server-side rendering.
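As a minimal sketch of the idea (not a complete SSR setup), assuming Node.js with Express and its TypeScript typings installed, and a hypothetical getProduct helper standing in for your data source, the server below builds the full HTML, including the data, before sending it to the browser, so a crawler receives the content without running any JavaScript:

```typescript
import express from "express";

const app = express();

// Hypothetical data lookup: in a real application this would query a database or API.
async function getProduct(id: string): Promise<{ name: string; description: string }> {
  return { name: `Product ${id}`, description: "A sample description." };
}

app.get("/products/:id", async (req, res) => {
  const product = await getProduct(req.params.id);

  // The HTML already contains the content when it leaves the server,
  // so crawlers can read it without executing JavaScript.
  res.send(`<!DOCTYPE html>
<html>
  <head><title>${product.name}</title></head>
  <body>
    <h1>${product.name}</h1>
    <p>${product.description}</p>
  </body>
</html>`);
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```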
Implement dynamic rendering
With good dynamic rendering, your website content can be indexed properly. Not all pages need dynamic rendering, and keep in mind that dynamic rendering is a workaround for crawlers. Dynamic rendering requires your web server to detect crawlers by checking the user agent. Requests from crawlers are routed to a renderer, while requests from users are served normally. When needed, the dynamic renderer serves a crawler-friendly version of the content, for example a static HTML version. You can choose to enable dynamic rendering for all pages or per page. Furthermore, you can read this article for your reference: Implement dynamic rendering.
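The user-agent check at the heart of dynamic rendering could look roughly like the Express handler below. It is only a sketch: the list of crawler user agents is illustrative rather than complete, and renderCrawlerVersion is a hypothetical helper that would return pre-rendered static HTML (for example produced by a headless browser or read from a cache):

```typescript
import express from "express";

const app = express();

// Illustrative (not complete) list of crawler user-agent substrings.
const crawlerUserAgents = ["googlebot", "bingbot", "facebookexternalhit", "linkedinbot"];

function isCrawler(userAgent: string | undefined): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return crawlerUserAgents.some((bot) => ua.includes(bot));
}

// Hypothetical helper that returns static, pre-rendered HTML for a URL,
// e.g. produced by a headless browser and stored in a cache.
async function renderCrawlerVersion(url: string): Promise<string> {
  return `<html><body><h1>Pre-rendered content for ${url}</h1></body></html>`;
}

app.get("*", async (req, res) => {
  if (isCrawler(req.headers["user-agent"])) {
    // Crawlers get the pre-rendered, crawler-friendly HTML.
    res.send(await renderCrawlerVersion(req.originalUrl));
  } else {
    // Regular users get the single-page application as usual.
    res.sendFile("index.html", { root: "dist" });
  }
});

app.listen(3000);
```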
Use a third-party pre-render service (for example, prerender.io)
Prerender.io regularly renders your website using an up-to-date Chrome and saves all the generated HTML pages in its database. It also gives you an API so you can access the rendered HTML for each of your website's URLs.
The only thing you need to do is add a proxy that checks the user agent. If the user agent belongs to a search engine or another crawler (Facebook, LinkedIn, etc.), you call the API, get the rendered HTML from prerender.io, and return it to the crawler. If the user agent is not a crawler, you return your SPA's index.html so the JavaScript loads as usual. Prerender.io has configurations for all familiar web servers: Apache, Nginx, HAProxy, Express, and so on.
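In Express, such a proxy could look roughly like the sketch below. The service URL and the X-Prerender-Token header are assumptions to verify against prerender.io's documentation, the token is a placeholder, and the crawler check is the same illustrative idea as in the previous sketch:

```typescript
import express from "express";

const app = express();

const PRERENDER_TOKEN = "YOUR_PRERENDER_TOKEN"; // placeholder: use your own token

// Illustrative, not exhaustive, crawler check.
function isCrawler(userAgent = ""): boolean {
  return /googlebot|bingbot|facebookexternalhit|linkedinbot|twitterbot/i.test(userAgent);
}

app.get("*", async (req, res) => {
  if (isCrawler(req.headers["user-agent"])) {
    // Assumed endpoint and header: check prerender.io's docs before relying on them.
    const targetUrl = `https://${req.hostname}${req.originalUrl}`;
    const prerendered = await fetch(`https://service.prerender.io/${targetUrl}`, {
      headers: { "X-Prerender-Token": PRERENDER_TOKEN },
    });
    res.status(prerendered.status).send(await prerendered.text());
  } else {
    // Regular visitors get the normal single-page application.
    res.sendFile("index.html", { root: "dist" });
  }
});

app.listen(3000);
```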
But if you don’t have the resources to develop this yourself, you should take a look at our software development services, which can include SEO for your website.