google places api · nearby search · radius · developer

Google Places API Nearby Search Radius: Maximum, Limits, Workarounds

How the Google Places API Nearby Search radius works — the 50,000 m maximum, edge cases, workarounds, and when to switch to MapsLeads.

MapsLeads Team · 2026-05-02 · 11 min read

If you have ever tried to enumerate every business in a metro area using the Google Places API, you have run into the same wall everyone else does: the Google Places API Nearby Search radius is capped at 50,000 meters. That single constant has shaped how thousands of scrapers, lead generators, and mapping tools are architected. It sounds generous on paper. In practice it is the gotcha that turns a one-line API call into a tiling pipeline, a deduplication layer, and a quota spreadsheet.

This article walks through what the radius parameter actually does, why the 50,000 m maximum exists, what happens at the edges, and the workarounds engineers reach for when an area of interest is bigger than a single circle. We will also look at when it stops being worth fighting the API and starts being cheaper to use a service that has already solved this problem.

What "radius" does in Nearby Search

Nearby Search is one of the original endpoints of the Places API. You give it a latitude and longitude pair, a radius in meters, and optionally a type or keyword. It returns up to 60 results, paginated 20 at a time across three pages, ranked by what Google calls "prominence" — a mix of popularity, ratings, and proximity.

The radius parameter defines a circular bias around your center point. Google scans places within that circle and surfaces what it considers the most prominent matches. The key word is bias. Even with a 5 km radius, you do not get every coffee shop in a 5 km circle. You get up to 60 of the ones Google decides are most relevant. That distinction is the second gotcha after the 50,000 m cap, and it shapes everything that follows.

Radius also interacts with location bias in subtle ways. A small radius in a dense downtown might still hit the 60-result ceiling immediately, while a 50 km radius in a rural area might return ten results and stop. The endpoint is not built for exhaustive enumeration. It is built for "show me what is around me," which is a different problem.
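A single prominence-ranked call can be sketched as follows. This is a sketch, not a client library: the endpoint URL and parameter names follow the public REST docs, while `YOUR_API_KEY` and the bounds check are illustrative.

```python
# Minimal sketch of one legacy Nearby Search request.
NEARBY_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def build_nearby_params(lat, lng, radius_m, keyword=None, api_key="YOUR_API_KEY"):
    """Query parameters for one prominence-ranked Nearby Search page."""
    if not 0 <= radius_m <= 50_000:
        raise ValueError("radius must be between 0 and 50,000 meters")
    params = {"location": f"{lat},{lng}", "radius": radius_m, "key": api_key}
    if keyword:
        params["keyword"] = keyword
    return params

# Each response carries at most 20 results; when next_page_token is present,
# re-request with pagetoken=... (after a short delay) for pages two and
# three -- 60 results total, however large the radius.
```

Note that the 60-result ceiling is per search, not per page: paginating does not widen coverage, it only drains the same prominence-ranked list.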

The 50,000 m maximum (and why)

The official documentation states it plainly: the radius parameter accepts values from 0 to 50,000 meters. That is fifty kilometers, or roughly 31 miles. Anything larger is rejected outright; only the exact error shape differs between endpoints and versions.

Why 50 km? Google has never given a public technical justification, but the practical reasons are easy to infer. First, the result set is hard-capped at 60 places regardless of area. A larger radius would only mean more disappointment, since you would scan a bigger area and still only see 60 entries. Second, the prominence ranking degrades as the area grows — at continental scale, "prominence" stops being meaningful. Third, server-side spatial queries get expensive fast as the search circle grows, and Google tunes its public endpoints for predictable latency.

The 50 km cap is also one of the limits that has stayed remarkably stable across every API revision, including the move from the legacy Places API to the new Places API (sometimes called Places API v1 or "New Places"). The cap survived. If you are looking for the broader picture of what else changed and what stayed the same, see the Google Maps API limits explained breakdown.

Edge cases: rankby=distance rejects radius

There is one important exception to the radius rules. If you set rankby=distance instead of the default prominence ranking, the radius parameter is no longer accepted. You must omit it. In that mode, results are ordered strictly by distance from the center point, and Google decides the cutoff itself, which is typically a few kilometers in dense areas.

This trips up a lot of developers. They try to combine rankby=distance with radius=50000 hoping to get distance-ordered results across a large area, and the API rejects the request. The two options are mutually exclusive. You either get prominence-ranked results inside an explicit circle, or distance-ranked results inside an implicit Google-decided window. Pick one.
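That mutual exclusion is easy to encode as a pre-flight check before any request leaves your process. A sketch of such a validator, assuming the documented legacy rules (including the requirement that rankby=distance be paired with a keyword, name, or type):

```python
def validate_nearby_params(params):
    """Raise locally instead of burning a billed call on INVALID_REQUEST."""
    if params.get("rankby") == "distance":
        if "radius" in params:
            raise ValueError("rankby=distance and radius are mutually exclusive")
        if not any(k in params for k in ("keyword", "name", "type")):
            raise ValueError("rankby=distance needs a keyword, name, or type")
    elif "radius" not in params:
        raise ValueError("prominence ranking requires an explicit radius")
    return params
```

Running this before each call turns a round-trip API rejection into an immediate local error, which matters when a tiling job fires thousands of requests.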

There are also smaller edge cases worth knowing. A radius of 0 is technically valid but useless. Negative values are rejected. Floating-point values are accepted and rounded. And a center point outside Google's coverage area returns zero results regardless of radius.

What happens above 50,000 m: request rejected vs ignored

Behavior here depends on which API surface you are hitting. The legacy /maps/api/place/nearbysearch endpoint returns an INVALID_REQUEST status when radius is greater than 50,000. The error message is explicit: "Search radius should not exceed 50000 meters." No partial result, no clamp, just a rejection.

The new Places API behaves slightly differently. The places:searchNearby method takes a locationRestriction object containing a circle, and the circle has a radius in meters. Send anything above 50,000 there and you get an HTTP 400 with a structured error pointing to the offending field. Again, no silent clamp.
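In the new API the circle lives inside a nested locationRestriction object. A sketch of the request body builder, with field names following the places:searchNearby reference; the maxResultCount value and type filter are illustrative choices, not requirements:

```python
def search_nearby_body(lat, lng, radius_m, included_types=None):
    """JSON body for POST https://places.googleapis.com/v1/places:searchNearby."""
    if not 0 < radius_m <= 50_000:
        # mirrors the server-side check that yields an HTTP 400
        raise ValueError("circle.radius must be in (0, 50000] meters")
    body = {
        "locationRestriction": {
            "circle": {
                "center": {"latitude": lat, "longitude": lng},
                "radius": float(radius_m),
            }
        },
        "maxResultCount": 20,
    }
    if included_types:
        body["includedTypes"] = list(included_types)
    return body
```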

There used to be a folklore claim that the API would silently ignore values above 50 km and treat them as 50 km. That was true for some very old client libraries that clamped client-side, but it is no longer true of the live API. If you hit the wall, you will know — your job will fail, not silently truncate. That is actually the better failure mode, because silent clamps were producing maps with mysterious doughnut holes for years.

Workaround one: tiling and hex grids

The standard workaround is to split your area of interest into a grid of smaller circles, each within the 50 km cap, and run a Nearby Search for each cell. Two grid shapes dominate.

A square tile grid is the easy version. Pick a tile size, say 5 km, walk a latitude/longitude grid across your bounding box, and fire a request at each cell center with radius equal to the tile half-diagonal. It works, but it overlaps neighbors and double-counts businesses on the seams. You then deduplicate by place_id.

A hex grid is the more elegant version. Hexagons tile a plane with no gaps and minimal overlap, so for a given coverage you make fewer requests. Libraries like Uber's H3 give you ready-made hex indexing at multiple resolutions. At H3 resolution 6, each hex is roughly 5 to 6 km across, comfortably under the 50 km radius cap. You compute the hex centers covering your AOI, query each, dedupe by place_id, done.

Either way, you must dedupe. Even with hexes you will see the same place_id from multiple cells when a business sits near a boundary. And because Nearby Search caps at 60 results per call, you also need your tile size small enough that no single tile contains more than 60 matching businesses, or you will silently miss results in dense areas. In a downtown core, that often means dropping to 1 km or even 500 m tiles for restaurant queries.
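The square-tile version of that loop can be sketched in a few lines. The 111.32 km-per-degree constant is the usual spherical approximation, and the tile size and dedupe shape are illustrative, not a production tiler:

```python
import math

def square_tile_centers(south, west, north, east, tile_km=5.0):
    """(lat, lng, radius_m) tuples for a square grid over a bounding box.
    Radius is the tile half-diagonal so each circle fully covers its square."""
    radius_m = round(tile_km * 1000 * math.sqrt(2) / 2)
    lat_step = tile_km / 111.32          # ~km per degree of latitude
    centers = []
    lat = south + lat_step / 2
    while lat < north:
        # longitude degrees shrink with latitude
        lng_step = tile_km / (111.32 * math.cos(math.radians(lat)))
        lng = west + lng_step / 2
        while lng < east:
            centers.append((round(lat, 6), round(lng, 6), radius_m))
            lng += lng_step
        lat += lat_step
    return centers

def dedupe_by_place_id(pages):
    """Merge results from many tiles, keeping one record per place_id."""
    seen = {}
    for page in pages:
        for place in page:
            seen.setdefault(place["place_id"], place)
    return list(seen.values())
```

Swapping the square walk for H3 cell centers changes only the first function; the dedupe step stays identical.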

Workaround two: Text Search vs Nearby Search

Text Search is the other Places endpoint, and it has different rules. It accepts a query string like "coffee in Brooklyn" and returns up to 60 results biased by an optional location and radius. The radius behaves more like a soft hint than a hard boundary, and the query string can carry the geographic constraint instead.

For some workloads, swapping Nearby Search for Text Search lets you sidestep the radius cap altogether. A query like "dentist in Berlin" returns relevant Berlin dentists without you having to tile the city yourself. The downside is the same 60-result ceiling, plus the fact that Text Search costs more per call and ranks results differently. It is a useful tool when your AOI lines up with a named place, and a poor tool when your AOI is an arbitrary polygon.
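The request shape for the legacy Text Search endpoint, as a sketch (parameter names follow the public docs; the placeholder key and the optional bias tuple are illustrative):

```python
def build_text_search_params(query, api_key="YOUR_API_KEY",
                             center=None, radius_m=None):
    """Query parameters for the legacy Text Search endpoint.

    location/radius act as a bias, not a hard boundary -- the query string
    itself ("dentist in Berlin") can carry the geographic constraint.
    """
    params = {"query": query, "key": api_key}
    if center is not None:
        params["location"] = f"{center[0]},{center[1]}"
    if radius_m is not None:
        params["radius"] = radius_m  # same 50,000 m ceiling applies here
    return params
```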

Workaround three: switching to a SaaS

At some point the engineering effort of building a tiler, deduper, retry layer, and quota tracker outweighs the cost of using a service that has done it already. That break-even point comes faster than most teams expect. A proper coverage pipeline for one country is weeks of work, ongoing maintenance, and a Google Cloud bill that grows with every dense city you add. The detailed comparison lives in MapsLeads vs Google Places API direct, and the underlying extraction problem is covered in Bulk Google Maps data extraction.

Cost implications of tiling

Tiling does not just multiply engineering complexity. It multiplies cost linearly with the number of cells. Cover a country with 5 km hexes and you are looking at tens of thousands of Nearby Search calls per category. Each call is billed at the Nearby Search SKU rate, which at the time of writing is about $32 per thousand for the basic data tier and significantly more once you add Contact or Atmosphere fields.

Even before you fetch contact details, a national-scale tiling job for a single business category can run into the hundreds or low thousands of dollars. Add a second category and the cost doubles. Add Place Details lookups for emails and websites and the cost can triple again. The 50 km radius cap is not just a coding inconvenience — it is the reason exhaustive Google Places extraction is expensive at scale.
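To make the arithmetic concrete, here is a back-of-envelope estimator. The per-1,000 rate and the three-pages-per-tile assumption come from the figures above; treat both as inputs, not authoritative pricing.

```python
def estimate_tiling_cost(num_tiles, pages_per_tile=3, usd_per_1000=32.0):
    """Rough spend for one category over a tiled AOI.

    Assumes every page fetch is billed as one Nearby Search request at the
    basic-tier rate quoted above (~$32 per 1,000 requests).
    """
    requests = num_tiles * pages_per_tile
    return requests * usd_per_1000 / 1000.0

# e.g. 20,000 hex cells at 3 pages each is 60,000 requests:
# estimate_tiling_cost(20_000) -> 1920.0 dollars per category
```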

How MapsLeads handles area coverage at scale

MapsLeads was built specifically because the tiling-and-dedup loop is the wrong abstraction for lead generation. You do not want to think in 50 km circles. You want to think in cities, regions, or countries.

Searches in MapsLeads accept a city, a region, or a country as the geographic input. There is no radius parameter, no manual tiling, no hex math, no dedup logic. Behind the scenes the platform handles coverage so you do not have to. You pick "Restaurants in Paris" or "Plumbers in Île-de-France" and you get the full set, not a 60-result slice of one circle.

Pricing is credit-based and predictable. The Base record costs 1 credit and includes name, address, phone, website, category, rating, and review count. Contact Pro adds emails and decision-maker signals for 1 extra credit. Reputation adds review-derived insights for 1 extra credit. Photos adds extracted imagery for 2 extra credits. There is no per-request multiplier, no surprise SKU change when you cross a city boundary, no quota math at the end of the month.

For teams that have been hand-rolling Places API tilers, the most striking difference is operational. You stop maintaining a tiler. You stop tracking deduplication ratios. You stop writing post-mortems about which dense neighborhoods got truncated at 60. You query a city, you get the city, and the cost is a clean number you can put in a quote. See Pricing for the full credit table.

FAQ

What is the maximum radius for Places Nearby Search? The maximum is 50,000 meters, or 50 kilometers. That cap applies to both the legacy Nearby Search endpoint and the new Places API searchNearby method.

Can I exceed 50,000 m in a single request? No. Requests above 50,000 m are rejected with an INVALID_REQUEST or HTTP 400 error depending on the API version. There is no flag, paid tier, or workaround that lifts the cap on a single call.

What is the best workaround for a large area of interest? Tile the AOI with a hex or square grid where each cell fits inside the 50 km cap, run a Nearby Search per cell, and deduplicate by place_id. For very dense areas, drop tile size below 5 km so you do not hit the 60-result-per-call ceiling.

Is there a Places API alternative for big areas? Yes. Text Search avoids manual tiling when your AOI is a named place. For exhaustive coverage at city, region, or country scale, a service like MapsLeads removes the tiling problem entirely and charges per record instead of per request.

Does rankby=distance use the radius parameter? No. When rankby is set to distance, the radius parameter must be omitted. The two options are mutually exclusive in Nearby Search.

Will Google ever raise the 50 km cap? It has been stable for more than a decade and survived the migration to the new Places API. There is no public roadmap suggesting it will change. Build your architecture assuming the cap is permanent.

Conclusion

The 50,000 m radius cap is not going anywhere. Every Places API integration that needs broad geographic coverage eventually builds a tiler, a deduper, and a cost dashboard around it. That is fine when you are doing one project. It becomes a tax when lead extraction is part of your weekly workflow.

If you are tired of writing radius math, get started with MapsLeads and pull a city in one query.