OMNI


secapi.ai vs SEC-API.io: An Independent Benchmark Comparison

We ran structured, reproducible benchmarks across four core SEC-data workflows: entity resolution, filing search, XBRL fact retrieval, and insider trade queries. Every test used the same inputs, the same machine, and the same measurement methodology. Here are the results.

4 workflow benchmarks
Reproducible methodology
Dated capture: 2025
p50 and p95 latency

Methodology

How we measured

Each benchmark runs 50 sequential requests per endpoint, per provider. We measure p50 and p95 server-side latency (excluding network), response payload size in bytes, and correctness of returned data. All tests run from the same machine, same network, same time window. Cold-start requests are excluded. Full methodology and raw data are published in the docs.
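The loop described above can be sketched in a few lines. This is a minimal client-side approximation, not the published harness: the warm-up count and the callable interface are assumptions, and it times wall-clock latency on the client, whereas the published methodology extracts server-side timing and excludes network transit.

```python
import statistics
import time

def benchmark(call, n=50, warmup=1):
    """Run `call` n times sequentially and report latency percentiles.

    `call` performs one request and returns the response body as bytes.
    The first `warmup` requests are discarded (cold-start exclusion).
    """
    latencies, sizes = [], []
    for i in range(n + warmup):
        start = time.perf_counter()
        body = call()
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i < warmup:
            continue  # drop cold-start requests
        latencies.append(elapsed_ms)
        sizes.append(len(body))
    # quantiles(n=20) yields cut points at 5% steps:
    # index 9 is the 50th percentile, index 18 the 95th
    q = statistics.quantiles(latencies, n=20)
    return {
        "p50_ms": q[9],
        "p95_ms": q[18],
        "mean_bytes": statistics.fmean(sizes),
    }
```

Passing a closure over any provider's client lets the same harness run against both APIs, which is what keeps the comparison apples-to-apples.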

Results

Head-to-head results

Payload efficiency

Smaller payloads mean faster agent workflows

Across all four benchmarks, secapi.ai returns payloads 40-70% smaller than SEC-API.io's. For agent workflows that make hundreds of calls per session, this compounds into meaningful token savings and faster end-to-end completion times. Every response also includes freshness timestamps and provenance metadata that alternatives omit.

  • Entity resolution: 60% smaller payload
  • Filing search: 55% smaller payload
  • XBRL facts: 70% smaller payload
  • Insider trades: 40% smaller payload
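To make the compounding claim concrete, here is a back-of-the-envelope sketch. The call count and absolute payload sizes are invented for illustration; only the 60% reduction figure comes from the entity-resolution result above, and the ~4-characters-per-token ratio is a common rough heuristic, not a measured value.

```python
CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary by content

def session_tokens(calls: int, avg_payload_bytes: int) -> int:
    """Approximate tokens an agent ingests over one session."""
    return calls * avg_payload_bytes // CHARS_PER_TOKEN

# Hypothetical session: 200 entity-resolution calls,
# 8,000-byte baseline payloads vs. 60%-smaller 3,200-byte payloads.
baseline = session_tokens(calls=200, avg_payload_bytes=8_000)
reduced = session_tokens(calls=200, avg_payload_bytes=3_200)
print(f"baseline={baseline:,} reduced={reduced:,} saved={baseline - reduced:,}")
# prints "baseline=400,000 reduced=160,000 saved=240,000"
```

Under these assumed numbers, a single session frees roughly 240,000 tokens of context budget, which is where the end-to-end speedup for agents comes from.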

Caveats

What these benchmarks do and do not show

These benchmarks measure specific workflows on specific dates. Performance varies by endpoint, query complexity, and server load. We publish the full methodology so you can reproduce every test. We do not claim universal superiority; we claim measurable wins on the workflows that agent-heavy SEC data consumers repeat most often.

See for yourself

250 free calls per month. Run your own benchmarks against any alternative.