Use the Website Audit API to grow rankings and traffic
We’ll follow a simple, repeatable flow:
- Launch a website audit.
- Retrieve the audit report and health score.
- Identify high-impact crawlability and indexability issues.
- Get affected URLs for each issue.
- Analyze internal linking and content issues.
- Inspect individual pages with ranking potential.
- Recheck the audit after fixes.
Step 1. Launch a website audit
Start by creating a new audit for the domain you want to improve. This crawl is the foundation for identifying ranking and traffic blockers.
- Use a standard audit for most websites.
- Use an advanced audit for sites that rely heavily on JavaScript.
- Set `max_pages` to control crawl size.

Example: Launch a standard audit
```bash
curl --location 'https://api.seranking.com/v1/site-audit/audits/standard' \
--header 'Content-Type: application/json' \
--header 'Authorization: Token YOUR_API_KEY' \
--data '{
  "domain": "example.com",
  "title": "Example Audit",
  "settings": {
    "max_pages": 1000,
    "max_depth": 10,
    "check_robots": 1
  }
}'
```

Expected outcome: returns an audit ID, e.g., 700237036.
Step 2. Retrieve the audit report and health score
Once the audit is complete, fetch the full report using the endpoint Get audit report.
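As a minimal request sketch, assuming the report is served under the audit ID returned in Step 1 (the path below is illustrative; check the Get audit report reference for the exact route):

```bash
# Fetch the full audit report for the audit created in Step 1.
# The path is an assumption for illustration; see the
# "Get audit report" reference for the real route.
curl --location 'https://api.seranking.com/v1/site-audit/audits/700237036/report' \
--header 'Authorization: Token YOUR_API_KEY'
```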
Example report highlights:
```json
{
  "score_percent": 80,
  "total_errors": 346,
  "total_warnings": 979,
  "total_notices": 3915,
  "sections": [
    {
      "name": "Security",
      "props": {"no_https": {"status": "error"}}
    },
    {
      "name": "Crawling & Indexing",
      "props": {"http4xx": {"status": "error", "value": 28}}
    }
  ]
}
```
You can see from the report:
- overall site health score
- total errors, warnings, notices
- key issue categories (Crawling & Indexing, Security, Content)
Step 3. Identify high-impact crawlability and indexability issues
Next, focus on the issues that most directly affect rankings and traffic. These often determine whether search engines can crawl and index your content at all. Prioritize errors before warnings or notices, as errors have the highest SEO impact. Key issues to look for (a filtering sketch follows the list):
- 4xx and 5xx errors
- pages blocked by `noindex` or `robots.txt`
- canonical tag conflicts
- duplicate content
- hreflang issues for multilingual sites
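To triage programmatically, you can filter the Step 2 report for checks whose status is error. A minimal jq sketch, assuming the report JSON is saved as report.json and shaped like the highlights above:

```bash
# List every check in the report with status "error", grouped by section.
# Assumes report.json matches the structure shown in Step 2.
jq -r '.sections[]
  | .name as $section
  | .props | to_entries[]
  | select(.value.status == "error")
  | "\($section): \(.key) (\(.value.value // "n/a"))"' report.json
```

On the Step 2 sample this prints `Security: no_https (n/a)` and `Crawling & Indexing: http4xx (28)`, giving you a ready-made triage list.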
Step 4. Get affected URLs for each issue
For each critical issue, fetch the list of affected URLs. This turns problems into concrete tasks for your team.
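As a request sketch, assuming affected URLs are exposed per issue code under the audit (the exact route may differ; check the API reference):

```bash
# Fetch URLs affected by one issue code (here: title_duplicate).
# The path is an assumption for illustration; consult the API
# reference for the real route.
curl --location 'https://api.seranking.com/v1/site-audit/audits/700237036/issues/title_duplicate/urls' \
--header 'Authorization: Token YOUR_API_KEY'
```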
Example: URLs affected by `title_duplicate`

```json
{
  "total_urls": 143,
  "urls": [
    "https://example.com/page1",
    "https://example.com/page2"
  ]
}
```

Step 5. Analyze internal linking and content issues
Use the endpoint Get all crawled pages to pull page-level metrics and identify patterns (a filtering sketch follows the list):
- weak internal linking → `inlinks < 5` (the threshold may vary with site size and structure)
- low word count (potential thin content) → `words_count < 300`
- duplicate titles or H1s → `title_duplicate`, `h1_duplicate`
- pages with many issues → high `errors` and `warnings` counts
- low crawl priority → high `depth` + low `inlinks`
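A minimal jq sketch for turning these thresholds into a worklist, assuming the crawled-pages response is saved as pages.json with a top-level `pages` array and the field names used above (adjust to the actual response schema):

```bash
# Flag thin and weakly linked pages from the crawled-pages export.
# The "pages" wrapper and field names are assumptions based on the
# filters above; rename them to match the real response.
jq -r '.pages[]
  | select(.words_count < 300 or .inlinks < 5)
  | [.url, .words_count, .inlinks, .depth] | @tsv' pages.json
```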
Step 6. Inspect individual pages with ranking potential
Drill down into a high-value page using the endpoint Get all issues by URL. This helps you:
- understand why the page underperforms
- see all technical and on-page SEO issues
- align SEO fixes with content optimization
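As a request sketch, assuming Get all issues by URL accepts the page address as a query parameter (the route and parameter name below are illustrative; check the API reference):

```bash
# Inspect all issues for a single page. Path and parameter name are
# assumptions; see the "Get all issues by URL" reference.
curl --location --get 'https://api.seranking.com/v1/site-audit/audits/700237036/url-issues' \
--header 'Authorization: Token YOUR_API_KEY' \
--data-urlencode 'url=https://example.com/seo-tips'
```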
Example page inspection highlights:
```json
{
  "page_data": {
    "title": "SEO Tips",
    "words": 1820,
    "inlinks": 12
  },
  "issues": [
    {
      "code": "title_duplicate",
      "type": "notice"
    }
  ]
}
```
Step 7. Recheck the audit after fixes
After resolving issues, launch a recheck with the same settings (see the sketch after this list).
- Use a standard audit for most websites.
- Use an advanced audit for sites that rely heavily on JavaScript.
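One way to recheck, sketched here, is simply to repeat the Step 1 launch with an identical payload so the before/after numbers stay comparable (if the API offers a dedicated recheck endpoint, prefer that):

```bash
# Relaunch the audit with the same settings as the original run,
# keeping the crawl scope identical for a fair comparison.
curl --location 'https://api.seranking.com/v1/site-audit/audits/standard' \
--header 'Content-Type: application/json' \
--header 'Authorization: Token YOUR_API_KEY' \
--data '{
  "domain": "example.com",
  "title": "Example Audit (recheck)",
  "settings": {
    "max_pages": 1000,
    "max_depth": 10,
    "check_robots": 1
  }
}'
```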
Expected outcome: the recheck confirms that issues are fixed and the health score has increased.
| Metric | Before | After |
|---|---|---|
| Health score | 78 | 86 |
| 4xx errors | 183 | 21 |
| Canonical conflicts | 91 | 7 |
| Crawled pages | 18,432 | 18,901 |
