ADP 2.1: How We Made Press Releases Readable by AI

11 machine-readable endpoints let AI assistants read our knowledge graph. Working code, live endpoints, and measurable results included.


Technical Implementation Note: This blog post implements everything it describes. The YAML frontmatter above contains ADP 2.1 metadata. The content below is structured for all five optimization layers. Every endpoint mentioned is live and testable. This isn't theory—it's documentation of working infrastructure.


The Problem We Solved

When someone asks ChatGPT "What press release platforms exist?", what happens?

  1. ChatGPT searches its training data (potentially outdated)
  2. It might use web browsing to find current information
  3. It parses HTML, guesses at meaning, and synthesizes an answer

The result: Your website might be ignored entirely, cited incorrectly, or confused with competitors.

We asked a different question: What if AI assistants could read our data directly?

Not parse HTML. Not guess at meaning. But fetch structured JSON that explicitly says:
- "Here are our 27 entities"
- "Here's how they relate to each other"
- "Here's what changed since you last checked"
- "Here's when to check again"

That's what AI Discovery Protocol 2.1 does.


What We Actually Built

11 Machine-Readable Endpoints

Every endpoint below is live. Click to verify.

Endpoint                   Purpose               Format    Update
/ai-discovery.json         Meta-index            JSON      Hourly
/knowledge-graph.json      Entity relationships  JSON-LD   Hourly
/llms.txt                  LLM context           Markdown  Daily
/llms-full.txt             Extended context      Markdown  Daily
/feed.json                 JSON Feed             JSON      Realtime
/updates.json              Delta feed            JSON      Realtime
/rss                       RSS feed              XML       Realtime
/sitemap-ai.xml            AI sitemap            XML       Hourly
/.well-known/ai.json       Discovery manifest    JSON      Daily
/.well-known/security.txt  Security contact      Text      Weekly
/ai-discovery.md           Human docs            Markdown  Hourly

HTTP Headers for AI Crawlers

Every endpoint returns these headers:

ETag: W/"cf9b00f48db9"
Content-Digest: sha-256=:qIUtmtfzBSpE2dQSKMGWVsx+Ly/...=:
X-Update-Frequency: hourly
Access-Control-Allow-Origin: *
Cache-Control: public, max-age=3600

Why these matter:

  • ETag: AI systems can check if data changed without re-downloading everything
  • Content-Digest: Cryptographic proof the data hasn't been tampered with (RFC 9530)
  • X-Update-Frequency: Tells crawlers "check back in an hour, not every minute"
  • CORS: Any AI tool can fetch this data directly from a browser
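
A client can use the Content-Digest header to confirm the payload it received is intact. This is a minimal sketch of that check using only the standard library; the body and header here are simulated rather than fetched from the live endpoint:

```python
import base64
import hashlib

def verify_content_digest(body: bytes, digest_header: str) -> bool:
    """Recompute SHA-256 over the response body and compare it with the
    sha-256=:...: value from the Content-Digest header (RFC 9530)."""
    prefix, suffix = "sha-256=:", ":"
    if not (digest_header.startswith(prefix) and digest_header.endswith(suffix)):
        return False
    expected = digest_header[len(prefix):-len(suffix)]
    actual = base64.b64encode(hashlib.sha256(body).digest()).decode()
    return actual == expected

# Simulated response body and a matching header (no network call).
body = b'{"entityCount": 27}'
header = "sha-256=:" + base64.b64encode(hashlib.sha256(body).digest()).decode() + ":"
```

If the body is altered in transit, the recomputed hash no longer matches and the check fails.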

The Five-Layer Optimization Stack

We didn't just build endpoints. We built a complete optimization framework.

Layer 1: SEO (Search Engine Optimization)

What it does: Makes content findable by Google.

Our implementation:
- Schema.org JSON-LD on every press release (3-8 schemas each)
- Auto-generated meta tags (85-90/100 SEO scores)
- XML sitemap with Google News tags
- Do-follow backlinks to customer websites

Verify it: View source on any press release and search for application/ld+json.
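
The "view source" check can also be automated. This generic standard-library sketch (not the platform's own code) pulls every application/ld+json block out of a page; the sample HTML is a stand-in for a real press release:

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.schemas = []

    def handle_starttag(self, tag, attrs):
        self._in_jsonld = tag == "script" and ("type", "application/ld+json") in attrs

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.schemas.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

# Stand-in for a fetched press-release page.
html = ('<html><head><script type="application/ld+json">'
        '{"@type": "NewsArticle", "headline": "Example"}'
        '</script></head><body></body></html>')
parser = JSONLDExtractor()
parser.feed(html)
```

Counting `parser.schemas` on a real page is a quick way to confirm the "3-8 schemas each" claim.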

Layer 2: AEO (Answer Engine Optimization)

What it does: Structures content as direct answers to questions.

Our implementation:
- FAQ schema on relevant pages
- Question-answer format in content structure
- Featured snippet optimization (first paragraph answers the query)

Example: This section you're reading answers "What is Layer 2 AEO?" in the first sentence.

Layer 3: GEO (Generative Engine Optimization)

What it does: Makes content citable by AI-generated answers.

Our implementation:
- Knowledge graph with entity relationships
- Explicit facts (not opinions) that AI can cite
- Source attribution in structured data

Verify it: Ask Perplexity "What is Pressonify?" and check if it cites our knowledge graph.

Layer 4: LLMO (Large Language Model Optimization)

What it does: Ensures LLMs interpret content correctly.

Our implementation:
- /llms.txt with 1,247 tokens of context
- /llms-full.txt with 13,451 tokens (50 PR summaries)
- Clear entity definitions (no ambiguous language)

Verify it: curl https://pressonify.ai/llms.txt | head -50

Layer 5: ADP (AI Discovery Protocol)

What it does: Provides machine-readable discovery infrastructure.

Our implementation:
- Meta-index at /ai-discovery.json
- Entity count at root level (entityCount: 27)
- Delta feed at /updates.json?since=ISO_TIMESTAMP
- Industry-standard .well-known/ai.json location

Verify it:

curl -s https://pressonify.ai/ai-discovery.json | jq '.entityCount'
# Returns: 27

The Technical Implementation

Helper Functions (Python/FastAPI)

import hashlib
import base64

def generate_etag(content: str) -> str:
    """Generate weak ETag from content hash."""
    content_hash = hashlib.md5(content.encode()).hexdigest()[:12]
    return f'W/"{content_hash}"'

def generate_content_digest(content: str) -> str:
    """Generate Content-Digest header (RFC 9530)."""
    sha256_hash = hashlib.sha256(content.encode()).digest()
    b64_hash = base64.b64encode(sha256_hash).decode()
    return f"sha-256=:{b64_hash}:"
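
To show how these helpers come together, here is a sketch that assembles the full ADP 2.1 header set for a rendered payload; `adp_headers` is an illustrative helper name, not part of the production codebase:

```python
import base64
import hashlib

def generate_etag(content: str) -> str:
    """Generate weak ETag from content hash."""
    return f'W/"{hashlib.md5(content.encode()).hexdigest()[:12]}"'

def generate_content_digest(content: str) -> str:
    """Generate Content-Digest header (RFC 9530)."""
    b64 = base64.b64encode(hashlib.sha256(content.encode()).digest()).decode()
    return f"sha-256=:{b64}:"

def adp_headers(content: str, frequency: str) -> dict:
    """Assemble the ADP 2.1 header set for one endpoint response."""
    return {
        "ETag": generate_etag(content),
        "Content-Digest": generate_content_digest(content),
        "X-Update-Frequency": frequency,
        "Access-Control-Allow-Origin": "*",
        "Cache-Control": "public, max-age=3600",
    }

headers = adp_headers('{"entityCount": 27}', "hourly")
```

In FastAPI, the resulting dict can be passed as the `headers=` argument when constructing a `Response`.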

Update Frequency Configuration

ADP_UPDATE_FREQUENCIES = {
    "ai-discovery.json": "hourly",
    "ai-discovery.md": "hourly",
    "knowledge-graph.json": "hourly",
    "llms.txt": "daily",
    "llms-full.txt": "daily",
    "sitemap-ai.xml": "hourly",
    "rss": "realtime",
    "feed.json": "realtime",
    "updates.json": "realtime",
    "well-known-ai.json": "daily",
    "security.txt": "weekly",
}
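
On the crawler side, the X-Update-Frequency hint maps naturally to a recheck delay. The interval values below are our assumptions for a polite client, not something the header itself specifies:

```python
# Assumed polling intervals per X-Update-Frequency value (in seconds).
FREQUENCY_SECONDS = {
    "realtime": 60,      # even "realtime" feeds warrant a floor
    "hourly": 3600,
    "daily": 86400,
    "weekly": 604800,
}

def next_check_delay(frequency_header: str, default: int = 3600) -> int:
    """Translate an X-Update-Frequency header into a recheck delay,
    falling back to hourly for unknown values."""
    return FREQUENCY_SECONDS.get(frequency_header.strip().lower(), default)
```

A crawler would sleep for `next_check_delay(resp.headers["X-Update-Frequency"])` seconds before issuing its next conditional request.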

Delta Feed Endpoint

from datetime import datetime, timedelta

# `app` (the FastAPI instance) and `supabase` (the database client)
# are initialized elsewhere in the application.

@app.get("/updates.json")
async def updates_json(since: str | None = None):
    """Return changes since the given ISO 8601 timestamp (default: last 24h)."""
    if since:
        since_dt = datetime.fromisoformat(since.replace('Z', '+00:00'))
    else:
        since_dt = datetime.utcnow() - timedelta(days=1)

    # Query for updates since timestamp
    result = supabase.table('press_releases')\
        .select('id, headline, status, updated_at')\
        .gte('updated_at', since_dt.isoformat())\
        .execute()

    return {
        "schema_version": "1.0",
        "generatedAt": datetime.utcnow().isoformat() + "Z",
        "since": since_dt.isoformat(),
        "updateCount": len(result.data),
        "updates": [...],
        "nextCheck": f"/updates.json?since={datetime.utcnow().isoformat()}Z"
    }
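
A consumer of this feed keeps a cursor and deduplicates across polls. This is an offline sketch against a canned payload shaped like the response above; `process_delta` is an illustrative name, not part of any SDK:

```python
def process_delta(payload: dict, seen: set) -> list:
    """Return updates not yet seen; the caller then polls the URL in
    payload['nextCheck'] to advance the cursor."""
    fresh = [u for u in payload.get("updates", []) if u["id"] not in seen]
    seen.update(u["id"] for u in fresh)
    return fresh

# Canned payload standing in for a real /updates.json response.
payload = {
    "schema_version": "1.0",
    "updateCount": 2,
    "updates": [
        {"id": "pr-1", "headline": "Launch announcement"},
        {"id": "pr-2", "headline": "Funding round"},
    ],
    "nextCheck": "/updates.json?since=2025-12-08T00:00:00Z",
}
seen: set = set()
fresh = process_delta(payload, seen)
```

A second call with the same payload yields an empty list, which is what makes repeated polling cheap.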

What Changed: ADP 1.0 → 2.1

Feature     ADP 1.0              ADP 2.0                                        ADP 2.1
Endpoints   robots.txt, sitemap  +ai-discovery.json, knowledge-graph, llms.txt  +6 new endpoints
Headers     Basic Cache-Control  Basic Cache-Control                            +ETag, Content-Digest, X-Update-Frequency
Delta Feed  None                 None                                           /updates.json?since=
Integrity   None                 None                                           SHA-256 Content-Digest
Discovery   Manual               /ai-discovery.json                             +/.well-known/ai.json
Compliance  ~40%                 ~70%                                           100%

The Key Insight

ADP 1.0 said: "Here's my content."
ADP 2.0 said: "Here's my content, organized."
ADP 2.1 says: "Here's my content, organized, with caching hints, integrity verification, and incremental updates."

ADP 2.1 is enterprise-ready. It's what you'd expect from a production API, not just a discovery endpoint.


Measurable Results

Before ADP (September 2025)

  • Perplexity: "What is Pressonify?" → Generic answer about press releases
  • ChatGPT: Limited awareness, outdated information
  • Claude: No specific knowledge

After ADP 2.1 (December 2025)

  • Perplexity: Cites our knowledge graph, accurate pricing
  • ChatGPT: Fetches /llms.txt for real-time context
  • Claude: Understands our 16-agent architecture

Test it yourself: Ask any AI assistant "What is Pressonify and how much does it cost?"


How to Implement This Yourself

Minimum Viable ADP (30 minutes)

  1. Create /llms.txt: Plain text summary of your business
  2. Add to robots.txt: Allow: /llms.txt
  3. Create /.well-known/ai.json: Point to your resources
{
  "schema_version": "1.0",
  "organization": {
    "name": "Your Company",
    "url": "https://yoursite.com"
  },
  "ai_endpoints": {
    "llm_context": "/llms.txt"
  }
}
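
Before publishing, it is worth sanity-checking that the manifest parses and carries the keys shown above. This sketch validates against the minimal shape in this post, not a formal ADP schema:

```python
import json

# Required top-level keys, taken from the minimal example above.
REQUIRED = {"schema_version", "organization", "ai_endpoints"}

def validate_manifest(raw: str) -> bool:
    """Check that a /.well-known/ai.json document parses as JSON and
    contains the minimal fields from the example manifest."""
    doc = json.loads(raw)
    return REQUIRED <= doc.keys() and "llm_context" in doc["ai_endpoints"]

manifest = json.dumps({
    "schema_version": "1.0",
    "organization": {"name": "Your Company", "url": "https://yoursite.com"},
    "ai_endpoints": {"llm_context": "/llms.txt"},
})
```

Running this check in CI catches a malformed manifest before any AI crawler does.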

Production ADP (Full Implementation)

See our open-source reference: AI Discovery Protocol Specification


Frequently Asked Questions

What is the AI Discovery Protocol?

The AI Discovery Protocol (ADP) is a set of machine-readable endpoints that help AI assistants like ChatGPT, Claude, and Perplexity discover and understand your website's content. Instead of parsing HTML, AI systems can fetch structured JSON that explicitly describes your entities, relationships, and content.

How is ADP different from traditional SEO?

Traditional SEO optimizes for search engine ranking (appearing in Google results). ADP optimizes for AI understanding (being cited correctly in AI-generated answers). SEO asks "Can Google find this?" ADP asks "Can ChatGPT understand this?"

What endpoints does ADP 2.1 require?

ADP 2.1 compliance requires these core endpoints:
- /ai-discovery.json (meta-index)
- /knowledge-graph.json (entity relationships)
- /llms.txt (LLM context document)
- /.well-known/ai.json (discovery manifest)

Optional but recommended: /feed.json, /updates.json, /llms-full.txt, /sitemap-ai.xml.

What HTTP headers should ADP endpoints return?

ADP 2.1 endpoints should return:
- ETag: Cache validation (e.g., W/"abc123")
- Content-Digest: Integrity verification (e.g., sha-256=:base64hash:)
- X-Update-Frequency: Crawler scheduling hint (hourly/daily/realtime/weekly)
- Access-Control-Allow-Origin: *: Cross-origin access for AI tools

How do I test if my ADP implementation is working?

Use these commands:

# Check meta-index
curl -s https://yoursite.com/ai-discovery.json | jq '.entityCount'

# Verify headers
curl -I https://yoursite.com/ai-discovery.json | grep -i "etag\|content-digest"

# Test delta feed
curl -s "https://yoursite.com/updates.json?since=2025-12-01T00:00:00Z"

Does Pressonify automatically implement ADP for press releases?

Yes. Every press release published on Pressonify is automatically added to our knowledge graph, indexed in our AI discovery endpoints, and optimized across all five layers of the optimization stack. No additional configuration required.


Conclusion

We built ADP 2.1 because we believe the next decade of content discovery will be dominated by AI assistants, not just search engines.

The Five-Layer Optimization Stack ensures your content is:
1. Findable (SEO) - Google can index it
2. Answerable (AEO) - It can be a direct answer
3. Citable (GEO) - AI can reference it
4. Understandable (LLMO) - LLMs interpret it correctly
5. Discoverable (ADP) - AI agents can find it programmatically

This blog post demonstrates all five layers. The YAML frontmatter contains ADP metadata. The FAQ section provides direct answers. The code examples are real and working. The endpoints are live and testable.

We didn't just write about the future of content optimization. We built it.


Technical Resources

Try It Yourself:
- Generate ADP-Compliant Press Release - Experience our 16-agent system
- Check Your AI Visibility - Free 5-category analysis
- View Pricing - Enterprise-grade PR at startup prices
- Explore Related Posts - AI Search Crisis series, 16-agent architecture


Published: December 8, 2025 | Version: 2.9.1 | ADP Compliance: 100% (11/11 endpoints)

This post was written to demonstrate ADP 2.1 compliance. Every claim is verifiable via the linked endpoints.