
Performance

Get the lowest latency from Hummingbird APIs.

Connection Reuse

The biggest performance gain comes from reusing connections. Hummingbird supports HTTP/2 with TLS 1.3, which allows connection coalescing and multiplexing.

Why It Matters

| Request Type | TCP | TLS | Total Overhead |
|---|---|---|---|
| First request | ~30ms | ~80ms | ~110ms |
| Subsequent (same connection) | 0ms | 0ms | 0ms |

By reusing connections, subsequent requests skip TCP and TLS handshakes entirely.

Node.js

Node.js's built-in fetch (powered by undici) reuses connections through its default dispatcher, and since Node 19 the http module's default agent enables keep-alive as well. For high-throughput applications, configure an undici Agent explicitly:

javascript
import { Agent } from 'undici';

// Create a persistent agent with connection pooling
const agent = new Agent({
  keepAliveTimeout: 30_000,     // Keep connections alive for 30s
  keepAliveMaxTimeout: 60_000,  // Max keep-alive time
  connections: 10,              // Connection pool size
});

// Reuse the same agent for all requests
async function geoLookup(ip) {
  const response = await fetch(
    `https://api.hummingbirdapi.com/v1/geo/lookup?ip=${encodeURIComponent(ip)}`,
    {
      headers: { 'X-API-Key': process.env.HUMMINGBIRD_API_KEY },
      dispatcher: agent,
    }
  );
  return response.json();
}

For Express.js or other long-running servers, the default behavior is usually sufficient since the process stays alive.

Python

Use requests.Session to reuse connections:

python
import os
import requests

# Create a session (reuses connections automatically)
session = requests.Session()
session.headers['X-API-Key'] = os.environ['HUMMINGBIRD_API_KEY']

def geo_lookup(ip):
    response = session.get(
        'https://api.hummingbirdapi.com/v1/geo/lookup',
        params={'ip': ip}
    )
    return response.json()

# All calls reuse the same connection
geo_lookup('8.8.8.8')
geo_lookup('1.1.1.1')
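requests keeps up to 10 connections per host in its default pool. For highly concurrent workloads, you can raise that by mounting an HTTPAdapter on the session (the pool sizes below are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()

# Raise the per-host connection pool above the default of 10
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=50)
session.mount('https://', adapter)
```

`pool_maxsize` caps how many connections to a single host can be held open at once; it only matters when many threads share the session.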

For async Python, use httpx (HTTP/2 support requires the extra: pip install 'httpx[http2]'):

python
import os
import httpx

# Async client with connection pooling
client = httpx.AsyncClient(
    headers={'X-API-Key': os.environ['HUMMINGBIRD_API_KEY']},
    http2=True,  # Enable HTTP/2 (requires the httpx[http2] extra)
    timeout=10.0,
)

async def geo_lookup(ip):
    response = await client.get(
        'https://api.hummingbirdapi.com/v1/geo/lookup',
        params={'ip': ip}
    )
    return response.json()

Go

Use a shared http.Client:

go
package main

import (
    "net/http"
    "net/url"
    "os"
    "time"
)

// Create a client with connection pooling (reuse globally)
var client = &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        10,
        MaxIdleConnsPerHost: 10,
        IdleConnTimeout:     30 * time.Second,
    },
    Timeout: 10 * time.Second,
}

func geoLookup(ip string) (*http.Response, error) {
    req, err := http.NewRequest("GET",
        "https://api.hummingbirdapi.com/v1/geo/lookup?ip="+url.QueryEscape(ip), nil)
    if err != nil {
        return nil, err
    }
    req.Header.Set("X-API-Key", os.Getenv("HUMMINGBIRD_API_KEY"))
    return client.Do(req)
}

PHP

For PHP with cURL, reuse the handle:

php
<?php
// Create a reusable cURL handle
$ch = curl_init();
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        'X-API-Key: ' . getenv('HUMMINGBIRD_API_KEY'),
    ],
    // Keep connection alive
    CURLOPT_TCP_KEEPALIVE => 1,
    CURLOPT_TCP_KEEPIDLE => 30,
]);

function geoLookup($ip) {
    global $ch;
    curl_setopt($ch, CURLOPT_URL,
        "https://api.hummingbirdapi.com/v1/geo/lookup?ip=" . urlencode($ip));
    $response = curl_exec($ch);
    return json_decode($response, true);
}

// Multiple calls reuse the connection
geoLookup('8.8.8.8');
geoLookup('1.1.1.1');

// Close when done
curl_close($ch);

Ruby

Use a persistent connection with net/http:

ruby
require 'net/http'
require 'json'

# Create a persistent connection
uri = URI('https://api.hummingbirdapi.com')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.keep_alive_timeout = 30
http.start  # Open the connection once; without this, each request opens and closes its own

def geo_lookup(http, ip)
  request = Net::HTTP::Get.new("/v1/geo/lookup?ip=#{ip}")
  request['X-API-Key'] = ENV['HUMMINGBIRD_API_KEY']
  response = http.request(request)
  JSON.parse(response.body)
end

# All calls reuse the connection
geo_lookup(http, '8.8.8.8')
geo_lookup(http, '1.1.1.1')

Or use the faraday gem with persistent connections:

ruby
require 'faraday'

conn = Faraday.new(url: 'https://api.hummingbirdapi.com') do |f|
  f.adapter :net_http_persistent  # Persistent connections
  f.headers['X-API-Key'] = ENV['HUMMINGBIRD_API_KEY']
end

def geo_lookup(conn, ip)
  response = conn.get('/v1/geo/lookup', ip: ip)
  JSON.parse(response.body)
end

Parallel Requests

For multiple IP lookups, run the requests in parallel:

javascript
// Slow: sequential requests wait for each response
const sequential = [];
for (const ip of ips) {
  sequential.push(await geoLookup(ip));
}

// Fast: parallel requests run concurrently
const parallel = await Promise.all(
  ips.map(ip => geoLookup(ip))
);

Rate Limits

Parallel requests still count against your rate limits. Stay within your plan's per-second limit.
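One way to parallelize while staying under a per-second limit is to cap concurrency with a semaphore. A minimal asyncio sketch, where the stub stands in for a real async geo_lookup:

```python
import asyncio

async def gather_limited(coros, limit=5):
    """Run coroutines concurrently, but at most `limit` at a time."""
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))

# Stub standing in for an async geo_lookup(ip) call
async def fake_lookup(ip):
    await asyncio.sleep(0)
    return {'ip': ip}

ips = ['8.8.8.8', '1.1.1.1', '9.9.9.9']
results = asyncio.run(gather_limited([fake_lookup(ip) for ip in ips], limit=2))
```

gather preserves input order, so results line up with the ips list regardless of completion order.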

Response Caching

For IP addresses that don't change frequently (e.g., server IPs), cache responses client-side:

javascript
const geoCache = new Map();
const CACHE_TTL = 60 * 60 * 1000; // 1 hour

async function geoLookupCached(ip) {
  const cached = geoCache.get(ip);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await geoLookup(ip);
  geoCache.set(ip, { data, timestamp: Date.now() });
  return data;
}

Latency by Region

Hummingbird runs on Cloudflare's global edge network. Latency depends on your distance to the nearest edge location:

| Region | Typical Latency |
|---|---|
| North America | 10-50ms |
| Europe | 15-60ms |
| Asia Pacific | 20-80ms |
| South America | 30-100ms |

Requests are automatically routed to the nearest edge location.

Measuring Performance

Use timing headers to measure actual API latency:

bash
curl -w "\n--- Timing ---
DNS:        %{time_namelookup}s
Connect:    %{time_connect}s
TLS:        %{time_appconnect}s
First Byte: %{time_starttransfer}s
Total:      %{time_total}s
" \
  -H "X-API-Key: $HUMMINGBIRD_API_KEY" \
  "https://api.hummingbirdapi.com/v1/geo/lookup?ip=8.8.8.8"

For the most accurate measurements, run multiple requests and observe the difference between the first (cold) and subsequent (warm) requests.
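To compare cold and warm requests from code rather than curl, a small timing wrapper (a hypothetical helper, usable with any of the geo_lookup functions above) is enough; in Python:

```python
import time

def timed(fn, *args):
    """Call fn(*args) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# With a session-based geo_lookup, the first call pays the TCP+TLS
# handshake and subsequent calls should be noticeably faster:
#   _, cold = timed(geo_lookup, '8.8.8.8')
#   _, warm = timed(geo_lookup, '1.1.1.1')
```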

Summary

| Optimization | Latency Saved |
|---|---|
| Connection reuse | 80-150ms per request |
| Parallel requests | (n-1) * request_time |
| Client-side caching | Full round-trip |

For most applications, connection reuse provides the biggest improvement with minimal code changes.
