AI, search, and content in 2025: What content creators and enterprises need to know

Ash Gawthorp, Ten10’s Chief Technology Officer, considers the impact AI is having on SEO, the rise of agent-to-agent (A2A) communications, and why testing is the foundation of an AI-ready content strategy

AI-driven search is upending how both individuals and organisations discover information. Traditional search and SEO strategies are giving way to a landscape dominated by AI overviews, answer engines, and agent-to-agent (A2A) communications — reducing click-through rates and shifting value from clickable links to being cited or summarised directly by AI. Content creators and businesses must adapt to this new world by focusing on quality, semantic relevance, structured data, and enterprise-wide AI readiness.

The new search paradigm: AI’s impact

AI is changing the fundamentals of how people search:

  • AI-powered search platforms (ChatGPT, Google AI Overview, Perplexity) now account for a growing share of search traffic, while Google’s traditional dominance is slowly eroding.
  • Rising zero-click rates mean more queries are resolved on search platforms without users visiting the cited sites — up to 70% on some platforms.
  • AI algorithms increasingly use natural language understanding, entity recognition, and topical authority to summarise, cluster, or answer queries.
  • Brand and publisher traffic is dropping: depending on vertical, losses of 10–80% have been reported, with many organisations seeing their highest-ranking pages return fewer visits despite stable or even increased impressions.

How AI searches differ from human search

AI systems consume and interpret web data differently from human-influenced search algorithms:

  • Web Crawling vs. API vs. Agent-to-Agent: AI often collects information through massive web crawling (unstructured), API access (structured, real-time), and, for advanced platforms, A2A protocols (for direct agent communications and negotiated, contextual data packets).
  • Semantic Processing: AI leverages semantic ‘understanding’ — matching concepts or entities within text, not just keywords.
  • Retrieval-Augmented Generation: Many LLM-based platforms generate natural-language answers by synthesising from web documents, internal databases, proprietary research, and perhaps even agent-reported facts from partner APIs or other agents.
  • Citation & Mention, Not Just Hyperlinks: Being referenced by AI often does not require traditional backlinks. Brands and sources must optimise not only to rank but to be cited, recommended, or summarised by AI models.
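The practical difference between unstructured crawling and structured API access can be sketched in a few lines of Python. The page markup, field names, and helper functions below are invented purely for illustration; the point is why a negotiated, structured payload is easier for a machine to consume reliably than scraped text.

```python
import json
from html.parser import HTMLParser

# Hypothetical product page a crawler would have to scrape.
CRAWLED_HTML = "<html><body><h1>Acme Widget</h1><p>Price: £19.99</p></body></html>"

# The same fact exposed as a structured API payload.
API_PAYLOAD = '{"product": "Acme Widget", "price": 19.99, "currency": "GBP"}'

class TextExtractor(HTMLParser):
    """Collects visible text: the unstructured view a crawler starts from."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def price_from_crawl(html: str) -> float:
    # Fragile: depends on the page wording and layout surviving redesigns.
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return float(text.split("£")[1].split()[0])

def price_from_api(payload: str) -> float:
    # Robust: the field name is part of an agreed contract.
    return json.loads(payload)["price"]

print(price_from_crawl(CRAWLED_HTML))  # 19.99
print(price_from_api(API_PAYLOAD))     # 19.99
```

Both functions recover the same number, but only the API version keeps working when the page copy changes; A2A protocols push this contract idea further by letting agents negotiate the schema itself.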

Impact on publishers and content creators

  • Referral traffic is in decline. Organic click-through rates for high-ranking results commonly fall below 1% when AI Overviews are present, and premium features may even cannibalise paid traffic.
  • Discoverability means being the answer, not just the link. AI answers pull from web pages, structured data, FAQs, topic clusters, and proprietary research — not simply the highest-PageRank sites.
  • Authority and E-E-A-T (Expertise, Experience, Authoritativeness, Trustworthiness) remain foundational, but prominence is determined by context and semantic signals as much as by old-school ranking.
  • Unique, reference-worthy content wins. Proprietary research, clear frameworks, and quotable commentary increase chances of being used in AI-generated summaries and overviews.
  • Brand mention and citation are the new SEO currency. AI ‘citability’ depends on having your brand or expertise mentioned in multiple reputable sources and clearly attributed within structured content, even if that mention doesn’t translate to a direct hyperlink.

The multi-channel imperative: Why data access methods must coexist

The future of content discovery operates across multiple simultaneous pathways, each serving different AI consumption patterns and user needs. Organisations cannot rely on a single distribution method — success requires parallel optimisation across web crawling, API exposure, agent-to-agent protocols, and real-time streaming.

Why multiple methods are essential

Audience fragmentation

Different user segments prefer different AI platforms and search modalities. While younger users gravitate toward ChatGPT and conversational interfaces, enterprise users often rely on structured API integrations and internal agent systems.

Technical diversity

AI systems have varying technical capabilities and constraints. Web crawlers excel at comprehensive data collection but struggle with real-time updates. APIs provide structured, reliable data but require explicit integration partnerships. A2A protocols enable sophisticated agent collaboration but demand protocol adoption.

Risk mitigation

Dependence on any single distribution channel creates vulnerability. Organisations that relied exclusively on Google organic search have experienced dramatic traffic losses with AI Overview rollouts. Multi-channel strategies provide resilience against algorithm changes and platform policy shifts.

Content format optimisation

Different distribution methods favour different content structures. Web-crawled content benefits from semantic HTML and schema markup. API-distributed content requires structured JSON formats. A2A communications operate best with contextual metadata and negotiated data schemas.
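For the web-crawled channel, the standard structured format is schema.org JSON-LD. As a minimal sketch (the report metadata below is invented), a single canonical record can be rendered into the markup crawlers and AI systems parse:

```python
import json

# Hypothetical report metadata kept as one canonical record.
record = {
    "headline": "2025 AI Search Visibility Benchmark",
    "author": "Example Research Team",
    "datePublished": "2025-03-01",
}

def to_json_ld(rec: dict) -> str:
    """Render the record as schema.org Article markup for web crawlers."""
    doc = {"@context": "https://schema.org", "@type": "Article", **rec}
    return json.dumps(doc, indent=2)

# Embed in the page head so crawlers can extract entities without scraping prose.
script_tag = f'<script type="application/ld+json">\n{to_json_ld(record)}\n</script>'
print(script_tag)
```

The same record could feed the API channel as plain JSON, or be condensed into the contextual metadata an A2A exchange would carry.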

Distribution channel synergies

Cross-channel reinforcement

Brands mentioned consistently across web content, API responses, and agent interactions build stronger authority signals. AI systems increasingly validate information by cross-referencing multiple sources and access methods.

Content lifecycle management

A single piece of content can be optimised for multiple distribution channels simultaneously. A research report might be published as structured web content for crawling, made available via API for enterprise integrations, and prepared for agent-to-agent sharing through contextual summaries.

Performance amplification

Multi-channel distribution creates compound visibility effects. Content that performs well in web search often receives increased API requests and agent citations, creating positive feedback loops across distribution methods.


Testing: The foundation of AI-ready content strategy

In the rapidly evolving AI search landscape, systematic testing becomes critical for understanding which content strategies, formats, and distribution methods drive visibility and engagement. Traditional SEO testing approaches must be expanded to encompass AI-specific metrics and multi-channel optimisation.

AI visibility testing framework

Baseline Measurement: Organisations must establish current AI visibility across major platforms. This involves systematically querying ChatGPT, Perplexity, Google AI Overviews, and other AI systems with industry-relevant questions to document current brand mention rates, citation frequency, and competitive positioning.
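A baseline can be as simple as running a fixed query set through each platform and counting how often the brand appears. The answers below are canned stand-ins (a real audit would collect them from each platform's own interface), but the bookkeeping is the same:

```python
# Canned stand-ins for answers returned by AI platforms to a fixed query set.
# In a real audit these would be collected from each platform separately.
responses = {
    "best widget suppliers uk": "Acme and Widgetco are frequently recommended...",
    "how to choose a widget": "Consider durability and price. Widgetco's guide...",
    "widget maintenance tips": "Clean regularly and store somewhere dry.",
}

def mention_rate(brand: str, answers: dict) -> float:
    """Share of queries whose answer mentions the brand at all."""
    hits = sum(1 for text in answers.values() if brand.lower() in text.lower())
    return hits / len(answers)

print(f"Acme:     {mention_rate('Acme', responses):.0%}")
print(f"Widgetco: {mention_rate('Widgetco', responses):.0%}")
```

Re-running the same query set monthly, per platform, turns these one-off numbers into the trend lines needed for competitive positioning.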

A/B Testing for AI Optimisation: Traditional A/B testing must be adapted for AI contexts. This includes testing different content structures (FAQ vs. narrative vs. technical documentation), schema markup implementations, content freshness strategies, and citation attribution methods to determine which approaches maximise AI visibility.

Multi-Platform Testing: Content performance varies significantly across AI platforms. Testing reveals that content optimised for ChatGPT citations may not perform well in Perplexity responses, requiring platform-specific optimisation strategies.

Semantic Testing: AI systems interpret content semantically rather than through keyword matching. Testing different semantic approaches—entity-focused vs. topic-cluster vs. question-answer formats—reveals which content structures AI systems prefer for specific query types.

Testing infrastructure requirements

Automated Monitoring Systems: Manual testing becomes impractical at scale. Organisations need automated systems that regularly query AI platforms with relevant questions and track changes in brand mentions, citation rates, and competitive positioning over time.

Multi-Channel Analytics: Testing requires an analytics infrastructure that can track performance across web crawling (traditional organic search), API integrations (enterprise systems), and agent-to-agent communications. Each channel requires different measurement approaches and success metrics.

Content Performance Attribution: Advanced testing setups attribute content performance to specific optimisation techniques. This might involve tracking how schema markup affects AI citations, how content freshness impacts mention rates, or how structured data influences agent recommendations.

Competitive Intelligence: Testing frameworks must include competitive analysis capabilities. Understanding how competitors optimise for AI visibility provides benchmarks and reveals market opportunities for differentiated content strategies.

Critical testing areas

Content Structure Testing: Systematic testing of different content formats reveals which structures AI systems prefer. This includes testing bullet points vs. paragraphs, table formats vs. narrative descriptions, and FAQ sections vs. integrated question-answer content.
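One way to make such format comparisons systematic is a standard two-proportion z-test on citation counts. The counts below are invented for illustration; the threshold of |z| > 1.96 corresponds to the usual 95% confidence level.

```python
from math import sqrt

def two_proportion_z(cited_a: int, total_a: int, cited_b: int, total_b: int) -> float:
    """z-statistic for the difference in citation rates between two content formats."""
    p_a, p_b = cited_a / total_a, cited_b / total_b
    pooled = (cited_a + cited_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: FAQ pages cited in 48 of 200 sampled AI answers,
# narrative pages in 30 of 200.
z = two_proportion_z(48, 200, 30, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be noise
```

With these invented numbers the FAQ format's advantage clears the significance threshold; with smaller samples the same observed gap might not, which is the argument for collecting counts systematically rather than eyeballing.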

Freshness vs. Authority: AI systems balance content freshness against domain authority. Testing helps determine optimal content update frequencies and strategies for maintaining AI visibility while building long-term authority signals.

Citation Attribution: Testing different citation and attribution methods reveals which approaches increase the likelihood of AI systems crediting your organisation. This includes testing byline formats, source attribution methods, and expertise signal optimisation.

Cross-Platform Consistency: Testing content performance across multiple AI platforms reveals which optimisation techniques have universal applicability versus platform-specific requirements.

Testing-driven optimisation cycles

Continuous Iteration: AI search algorithms evolve rapidly, requiring continuous testing and optimisation. Organisations must establish regular testing cycles that identify performance changes and optimisation opportunities.

Feedback Loop Integration: Testing data must feed back into content creation processes. Writers and content strategists need regular reports on which content structures and topics drive the highest AI visibility to guide future content development.

Predictive Testing: Advanced testing approaches use performance data to predict content success before publication. This involves analysing historical patterns to identify content characteristics that correlate with strong AI visibility.


How organisations should prepare: Concrete steps

For publishers and marketers

  1. Structure for Machine and Human Readability: Use semantic HTML, clear headings, schema, tables, and lists to maximise data extractability by AI systems.
  2. Move Beyond Keywords: Prioritise intent and topic modelling; target longer, conversation-like query phrases and structure content to answer clearly.
  3. Publish Original Research: Invest in studies, benchmarking, and case studies that can serve as reference points for AI engines.
  4. Monitor Brand Visibility in AI Responses: Use tools for tracking AI citations and sentiment, not just Google SERP rankings.
  5. Optimise for Zero-Click Features: Secure featured snippets, People Also Ask, and knowledge panel spots by answering common user queries up-front.
  6. Continuous Content Auditing: Regularly update and re-optimise to remain relevant as AI engines continuously retrain on fresh data.
  7. Implement Multi-Channel Distribution: Develop content strategies that work across web crawling, API access, and agent-to-agent protocols simultaneously.
  8. Establish AI Visibility Testing Programs: Create systematic testing frameworks to measure and optimise content performance across AI platforms.

For enterprises and large organisations

  1. Implement an AI Readiness Framework: Align AI ambitions with business strategy, build data and infrastructure readiness, and cultivate AI-literate teams and governance models.
  2. Data Accessibility and Quality: Ensure proprietary databases, documentation, and knowledge repositories are structured, permissioned, and AI-accessible via APIs or agent protocols.
  3. Internal Agent Collaboration: Adopt A2A protocols for knowledge discovery, workflow automation, and context-rich interactions between enterprise AI agents.
  4. Leverage and Monitor Structured Data: Deploy advanced schema and structured data organisation to enhance both internal and external AI integration.
  5. Diversify Traffic and Discovery Pathways: Reduce reliance on a single channel (e.g., Google search) by building up email, app-based delivery, owned platforms, and partnership distribution.
  6. Build Multi-Modal Content Strategies: Prepare content for distribution across web crawling, API integration, and agent-to-agent communication channels.
  7. Develop AI Testing Capabilities: Establish infrastructure for measuring AI visibility, conducting multi-platform testing, and optimising content for AI consumption.

Testing tools and measurement approaches

AI visibility tracking tools

  • Ziptie.dev: Monitors brand mentions across AI platforms with sentiment analysis and competitive benchmarking
  • Rankshift.ai: Measures visibility in AI responses and provides competitor comparison data
  • SE Ranking AI Visibility Tracker: Tracks brand mentions and links in AI answers with competitive analysis
  • Custom Looker Studio Dashboards: Free approach using Google Analytics data to track AI referral traffic

A/B testing for AI optimisation

  • Multi-Armed Bandit Algorithms: Real-time optimisation of content variations based on AI citation performance
  • Semantic Testing Frameworks: Systematic testing of different content structures and semantic approaches
  • Cross-Platform Performance Testing: Comparative analysis of content performance across different AI systems
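As a sketch of the bandit idea: an epsilon-greedy policy mostly serves the content variant with the best observed citation rate, while occasionally exploring the others. The citation probabilities below are invented for the simulation; in a real deployment the reward would be whether each served variant actually earned an AI citation.

```python
import random

random.seed(42)

# Hypothetical true citation probabilities per content variant (unknown in practice).
TRUE_RATES = {"faq": 0.24, "narrative": 0.15, "table": 0.19}

counts = {v: 0 for v in TRUE_RATES}
rewards = {v: 0 for v in TRUE_RATES}

def choose(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: explore a random variant 10% of the time, else exploit."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_RATES))
    # Exploit the best observed rate; unplayed variants get an optimistic 1.0.
    return max(TRUE_RATES, key=lambda v: rewards[v] / counts[v] if counts[v] else 1.0)

for _ in range(5000):
    variant = choose()
    counts[variant] += 1
    # Simulated feedback: did this serving earn an AI citation?
    rewards[variant] += random.random() < TRUE_RATES[variant]

print(counts)  # traffic concentrates on the variant with the best citation rate
```

Unlike a fixed-split A/B test, the bandit shifts traffic toward the winner while the experiment is still running, which matters when AI platforms retrain frequently.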

Analytics infrastructure

  • Multi-Channel Attribution: Tracking systems that measure performance across web, API, and agent-to-agent distribution
  • Real-Time Monitoring: Automated systems for tracking changes in AI visibility and competitive positioning
  • Predictive Performance Analysis: Tools that forecast content success based on historical AI citation patterns

Why readiness is now critical

Organisations that act now to future-proof their web presence and information architecture will be cited, recommended, and surfaced by AI platforms — the new drivers of discovery and reputation. Those who delay can expect further erosion of their audiences, influence, and opportunity for monetisation as search shifts from lists of links to directly answered needs.

The competitive advantage goes to organisations that understand the technical differences between content consumption methods and optimise accordingly. Multi-channel strategies combined with systematic testing provide the foundation for sustained AI visibility and brand authority.

AI’s impact on search is both a threat and an opportunity

With zero-click searches rising, AI-generated summaries and agentic ecosystems are fundamentally reshaping discoverability, rankings, and brand value online. Success now hinges on creating original, structured, reference-worthy information, enabling AI compatibility and visibility, and adopting an enterprise-wide AI readiness posture to thrive amidst this revolution.

The organisations that will succeed are those that embrace the complexity of multi-channel distribution, invest in systematic testing frameworks, and continuously adapt their strategies based on performance data. The future belongs to brands that are discoverable, citable, and valuable across all forms of AI-powered search and discovery.