Blog / Benchmarks
secapi.ai vs SEC-API.io: An Independent Benchmark Comparison
We ran structured, reproducible benchmarks across four core SEC-data workflows: entity resolution, filing search, XBRL fact retrieval, and insider trade queries. Every test used the same inputs, the same machine, and the same measurement methodology. Here are the results.
Methodology
How we measured
Each benchmark runs 50 sequential requests per endpoint, per provider. We measure p50 and p95 server-side latency (excluding network), response payload size in bytes, and correctness of returned data. All tests run from the same machine, same network, same time window. Cold-start requests are excluded. Full methodology and raw data are published in the docs.
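The summarization step described above can be sketched as follows. This is a minimal illustration, not our actual harness: the `warmup=3` cold-start cutoff and the nearest-rank percentile method are assumptions for the example (the post does not specify either).

```python
def summarize(samples_ms, warmup=3):
    """Drop cold-start requests, then report p50/p95 latency.

    `warmup=3` is an illustrative choice; the methodology only says
    cold-start requests are excluded, not how many.
    """
    warm = sorted(samples_ms[warmup:])
    def pct(p):
        # nearest-rank percentile over the warm samples
        k = max(0, min(len(warm) - 1, round(p / 100 * (len(warm) - 1))))
        return warm[k]
    return {"p50": pct(50), "p95": pct(95), "n": len(warm)}

# e.g. 50 sequential latency samples for one endpoint (made-up values)
samples = [45.0] * 25 + [52.0] * 20 + [90.0] * 5
print(summarize(samples))  # → {'p50': 52.0, 'p95': 90.0, 'n': 47}
```

Reporting p50 alongside p95 matters because a single slow outlier barely moves the median but shows up clearly in the tail.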
Results
Head-to-head results
Entity Resolution
3.9x faster p50 latency
Resolving AAPL to a canonical entity: secapi.ai returns in ~45ms p50 vs ~175ms for SEC-API.io. The response includes CIK, FIGI, ISIN, CUSIP, exchange, sector, and SIC code.
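For concreteness, a resolved-entity record carrying the fields listed above might look like this. The dict shape and key names are illustrative assumptions, not the actual secapi.ai response schema; the identifier values are Apple's publicly known ones.

```python
# Illustrative shape of a resolved-entity record; key names are
# assumptions, not the actual secapi.ai response schema.
entity = {
    "ticker": "AAPL",
    "cik": "0000320193",     # Apple's CIK on EDGAR
    "figi": "BBG000B9XRY4",  # FIGI for Apple common stock
    "isin": "US0378331005",
    "cusip": "037833100",
    "exchange": "NASDAQ",
    "sector": "Technology",
    "sic": "3571",           # SIC: Electronic Computers
}
print(entity["cik"])  # → 0000320193
```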
Filing Search
3.9x faster p50 latency
Searching for AAPL 10-K filings: secapi.ai returns in ~90ms p50 vs ~350ms for SEC-API.io. Payloads are 55% smaller with structured metadata.
XBRL Facts
6.4x faster p50 latency
Retrieving structured XBRL facts: secapi.ai returns in ~55ms p50 vs ~350ms for SEC-API.io. Facts are pre-parsed into our own database rather than fetched on demand from EDGAR.
Insider Trades
4.1x faster p50 latency
Querying insider transactions: secapi.ai returns in ~65ms p50 vs ~265ms for SEC-API.io. Includes transaction type, ownership type, and reporting owner details.
Payload efficiency
Smaller payloads mean faster agent workflows
Across all four benchmarks, secapi.ai returns 40-70% smaller payloads. For agent workflows that make hundreds of calls per session, this compounds into meaningful token savings and faster end-to-end completion times. Every response includes freshness timestamps and provenance metadata that alternatives omit.
- Entity resolution: 60% smaller payload
- Filing search: 55% smaller payload
- XBRL facts: 70% smaller payload
- Insider trades: 40% smaller payload
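As a rough back-of-envelope for how payload size compounds, consider a single agent session. Both constants below are assumptions for illustration (~4 bytes per token is a common rule of thumb; the 300-call session is hypothetical), not figures from the benchmarks.

```python
# Back-of-envelope: smaller payloads compound across an agent session.
BYTES_PER_TOKEN = 4      # common rule of thumb, not measured here
CALLS_PER_SESSION = 300  # illustrative agent session, not benchmark data

def tokens_saved(baseline_bytes, reduction_pct):
    """Approximate tokens saved per session if every response
    shrinks by reduction_pct relative to a baseline payload."""
    saved_bytes = baseline_bytes * reduction_pct / 100 * CALLS_PER_SESSION
    return int(saved_bytes / BYTES_PER_TOKEN)

# e.g. a 10 KB filing-search response that is 55% smaller
print(tokens_saved(10_000, 55))  # → 412500
```

At these assumed numbers, a 55% payload reduction saves on the order of hundreds of thousands of tokens per session, which is where the end-to-end speedup for agents comes from.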
Caveats
What these benchmarks do and do not show
These benchmarks measure specific workflows on specific dates. Performance varies by endpoint, query complexity, and server load. We publish the methodology so you can reproduce the tests. We do not claim universal superiority; we claim measurable wins on the workflows that agent-heavy SEC data consumers repeat most often.
See for yourself
250 free calls per month. Run your own benchmarks against any alternative.