# Get started with ngrok's API gateway
You've developed a world-class API, and now you want to make it available online.
Aside from the challenges involved in any API gateway deployment, you're looking for a solution that allows you to:
- Consistently apply security and traffic management policy in one place
- Provide a single pane of glass for observability
- Work identically in every cloud or environment
## What you'll learn
In this tutorial, you'll learn how to implement ngrok as an API gateway with these broad steps:
- Set up the common pattern for ngrok's API gateway.
- Create one or more internal agent endpoints for your upstream API services.
- Create a cloud endpoint to centralize your traffic management policies, including how to forward traffic to your internal agent endpoint.
## What you'll need
- An ngrok account: Sign up if you don't already have one.
- Your authtoken: Create an authtoken using the ngrok dashboard.
- A reserved domain: Reserve a domain in the ngrok dashboard or using the ngrok API.
  - You can choose an ngrok subdomain or bring your own custom branded domain, like `https://api.example.com`.
  - We'll refer to this domain as `<YOUR_NGROK_DOMAIN>` throughout the guide.
- The ngrok agent: Download the appropriate version and install it on the same machine or network as the API service you want to make available via ngrok's API gateway.
- (optional) An API key: Create an ngrok API key if you'd like to use the ngrok API to manage your cloud endpoints.
## Deploy a demo API service (optional)
If you don't yet have API services you'd like to bring online with an API gateway, or just want to quickly wire up a POC using ngrok, we recommend our ngrok demo API, which responds with details about the request.
Assuming you have Docker installed on the systems where your API services run, you can deploy a container listening on port `4000`.
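Assuming the demo image is published as `ngrok/ngrok-demo-api` (substitute the exact image name from ngrok's demo API docs if it differs), a command along these lines starts the first container:

```bash
# Start the demo API listening on port 4000 (image name is an assumption)
docker run -d --rm -e PORT=4000 -p 4000:4000 ngrok/ngrok-demo-api
```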
Start up a second container on port `5000`.
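Assuming the same `ngrok/ngrok-demo-api` image name as above:

```bash
# Start a second copy of the demo API on port 5000
docker run -d --rm -e PORT=5000 -p 5000:5000 ngrok/ngrok-demo-api
```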
## Create an internal agent endpoint
Your upstream API service needs a way of receiving traffic from the ngrok network, which you can establish with an agent endpoint on an internal URL like `https://abc.internal`. Replace `4000` with your service's port if you've brought your own API service.
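With the agent installed and your authtoken configured, a single `ngrok http` command creates the internal agent endpoint:

```bash
# Expose the upstream on an internal URL; it's reachable only from your ngrok account
ngrok http 4000 --url https://abc.internal
```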
Your API service isn't yet accessible on the public internet. To fix that, you need two things:
- A cloud endpoint for traffic routing and centralized policy management.
- A Traffic Policy rule that forwards traffic from your cloud endpoint to `https://abc.internal`.
## Create a cloud endpoint
Cloud endpoints are persistent, always-on endpoints that you can manage with the ngrok dashboard or API.
You centrally control your traffic management and security policy on your cloud endpoint, then forward traffic to your endpoint pool. That's much easier than managing these policies for each service separately and trying to keep them in sync to ward off configuration drift.
**Dashboard**

First, log into the ngrok dashboard. Click Endpoints → + New.
Leave the Binding value Public, then enter the domain name you reserved earlier. Click Create Cloud Endpoint.
With your cloud endpoint created, you'll see a default Traffic Policy in the dashboard. Paste in the YAML below to apply the rule.
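A minimal rule that forwards every request to your internal agent endpoint looks like this:

```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://abc.internal
```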
Click Save to apply your changes.
**API**

The ngrok CLI provides a helpful wrapper around the ngrok API, which you can use to create a cloud endpoint and apply a file containing Traffic Policy rules.
Create a new file named `policy.yaml` on your local workstation with the following YAML.
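A minimal `policy.yaml` that forwards every request to your internal agent endpoint:

```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://abc.internal
```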
Create a cloud endpoint on `<YOUR_NGROK_DOMAIN>`, passing your `policy.yaml` file as an option.
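Assuming a recent agent version (the exact flags may differ; check `ngrok api endpoints create --help`), the command looks roughly like:

```bash
# Create a public cloud endpoint and attach the Traffic Policy from policy.yaml
ngrok api endpoints create \
  --url "https://<YOUR_NGROK_DOMAIN>" \
  --traffic-policy "$(cat policy.yaml)"
```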
You'll get a `201` response. Save the value of `id`, as you'll need it later to continue configuring the Traffic Policy applied to your cloud endpoint.
At this point, your API gateway is ready for traffic! Time to give it a go.
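Send a request to your reserved domain:

```bash
curl https://<YOUR_NGROK_DOMAIN>
```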
If you're using our demo API, you'll get back a JSON response describing your request.
## Route traffic to multiple services with Traffic Policy
Your API gateway is ready, but chances are you need to handle ingress into more than one API service. If you're using our demo API service, fire up another internal agent endpoint to route traffic to the second container on port `5000`.
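For example, assuming the second demo container from earlier:

```bash
# A second internal agent endpoint for the service on port 5000
ngrok http 5000 --url https://xyz.internal
```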
Enter our Traffic Policy system, which lets you filter traffic based on its properties and take action as it passes through ngrok's global network. Two important Traffic Policy concepts to note:

- Phases are the distinct points in the lifecycle of a request where you can filter and take action. For this use case, we're using `on_http_request`, which activates when ngrok receives an HTTP request over an established connection.
- Expressions define when to run your actions. They're written in Common Expression Language (CEL) and must evaluate to `true` to run the corresponding action.
The rules below:

- Filter for requests arriving only on `https://<YOUR_NGROK_DOMAIN>/abc` and forward them to your internal agent endpoint at `https://abc.internal`.
- Filter for requests arriving only on `https://<YOUR_NGROK_DOMAIN>/xyz` and forward them to your internal agent endpoint at `https://xyz.internal`.
You can also route by other properties, like subdomains and headers.
**Dashboard**

Copy and paste the rules below into your cloud endpoint's Traffic Policy editor in the dashboard. If you're bringing your own API services instead of using the demo API, you'll need to change `/abc` and `/xyz` to match your services' paths and the `url` for your internal agent endpoints.
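A sketch of the routing rules, matching on the request path with CEL expressions (adjust paths and URLs for your own services):

```yaml
on_http_request:
  # Requests to /abc go to the first internal agent endpoint
  - expressions:
      - req.url.path.startsWith('/abc')
    actions:
      - type: forward-internal
        config:
          url: https://abc.internal
  # Requests to /xyz go to the second internal agent endpoint
  - expressions:
      - req.url.path.startsWith('/xyz')
    actions:
      - type: forward-internal
        config:
          url: https://xyz.internal
```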
Hit Save to lock in the new policy.
**API**

Update your existing `policy.yaml` file with the YAML below. If you're bringing your own API services instead of using the demo API, you'll need to change `/abc` and `/xyz` to match your services' paths and the `url` for your internal agent endpoints.
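A sketch of the updated `policy.yaml`, matching on the request path with CEL expressions:

```yaml
on_http_request:
  - expressions:
      - req.url.path.startsWith('/abc')
    actions:
      - type: forward-internal
        config:
          url: https://abc.internal
  - expressions:
      - req.url.path.startsWith('/xyz')
    actions:
      - type: forward-internal
        config:
          url: https://xyz.internal
```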
Update your cloud endpoint.
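Assuming you saved the endpoint `id` from the create step (flags may vary by agent version; check `ngrok api endpoints update --help`):

```bash
ngrok api endpoints update <YOUR_ENDPOINT_ID> \
  --traffic-policy "$(cat policy.yaml)"
```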
You can now `curl` different paths to see your requests routed to the appropriate upstream API service.
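For example:

```bash
curl https://<YOUR_NGROK_DOMAIN>/abc
```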
You should get a response from the demo container listening on port `4000`.
And when you run `curl https://<YOUR_NGROK_DOMAIN>/xyz`, you'll get a response from the second container on port `5000` instead.
## Add traffic management policies
Your API gateway routes traffic, but doesn't yet do the essential work of an API gateway: offloading non-functional requirements from your services.
One great feature of ngrok's building blocks of endpoints and Traffic Policy rules is that they're composable—you can reuse them, chain them, and apply them at multiple stages in the lifecycle of an API request.
With the shape you've already created, you can centrally manage certain policies, like authentication, on your cloud endpoint, then compose additional rules onto specific services.
### Validate JWTs on all APIs and requests
API authentication is too important not to apply consistently across all your APIs and requests. That's where the always-on, front-door quality of a cloud endpoint comes in handy: you can apply the `jwt-validation` action once for dependable AuthN, no matter how many services you end up deploying behind your multicloud API gateway.
ngrok's JWT validation action helps you:

- Give your end users many ways to access your APIs.
- Ensure only requests containing the correct access token, specified by an `Authorization: Bearer ...` header, can access any of your APIs.
- Add claims to tokens for authorization and fine-grained access control, where a specific token may only have access to a certain API (`service_access: abc`) or apply RBAC (`features: read`).
- Use a single credential for end users who need to access multiple upstream services.
- Offload all this logic from your API services and run it in ngrok's network.
You can use any OAuth provider for JWT validation, but let's quickly cover the process with Auth0.
- Log in to your Auth0 tenant dashboard.
- Select Applications > APIs, then + Create API.
- Name your API whatever you'd like.
- Replace the value of the Identifier field with `<YOUR_NGROK_DOMAIN>`.
- Leave the default values for JSON Web Token (JWT) Profile and JSON Web Token Signing Algorithm.
- Click Create.
- Navigate to your application and click on the Test tab, where you can find a signed, fully functional JWT and examples of how to programmatically generate more.
The rule below builds on top of the previous cloud endpoint policy to:
- Reject requests missing a token with a `401 Unauthorized` error.
- Reject requests with an invalid token with a `403 Forbidden` error.
- Forward requests with a valid token to one of your internal agent endpoints based on the pathname.
You'll need to change the variables accordingly—if you're not sure where to find this information, we have a full integration guide with more details.
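A sketch of the combined policy, with hypothetical Auth0 placeholders you'll need to replace (`<YOUR_AUTH0_TENANT>` stands in for your tenant's domain, and the exact `jwt-validation` config shape may differ slightly by version):

```yaml
on_http_request:
  # Validate the JWT on every request before any routing happens
  - actions:
      - type: jwt-validation
        config:
          issuer:
            allow_list:
              - value: https://<YOUR_AUTH0_TENANT>.us.auth0.com/
          audience:
            allow_list:
              - value: https://<YOUR_NGROK_DOMAIN>
          http:
            tokens:
              - type: jwt
                method: header
                name: Authorization
                prefix: "Bearer "
          jws:
            allowed_algorithms:
              - RS256
            keys:
              sources:
                additional_jkus:
                  - https://<YOUR_AUTH0_TENANT>.us.auth0.com/.well-known/jwks.json
  # Requests that pass validation are routed by path, as before
  - expressions:
      - req.url.path.startsWith('/abc')
    actions:
      - type: forward-internal
        config:
          url: https://abc.internal
  - expressions:
      - req.url.path.startsWith('/xyz')
    actions:
      - type: forward-internal
        config:
          url: https://xyz.internal
```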
Apply the policy in either the dashboard or via the ngrok API.
### Rate limit specific API services

Let's say one of your services (like `abc` on port `4000`, if you're following along with the demo service) needs additional protection from unintentional misuse and malicious attacks.
The `rate-limit` Traffic Policy action allows you to reject requests with a `429` error code once a user or group has exceeded your customizable threshold.
Create a new file named `abc-policy.yaml` on the system where you're running your API services and ngrok agent, and paste in the YAML below.
The rule below creates a new policy at your agent to:

- Allow up to `10` requests per IP in a `60s` window.
- Reject requests that exceed the rate limiting capacity with a `429` error response.
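A sketch of the rate-limiting rule, keyed on client IP (see ngrok's Traffic Policy reference for the full set of `rate-limit` config fields):

```yaml
on_http_request:
  - actions:
      - type: rate-limit
        config:
          name: Allow 10 requests per minute per IP
          algorithm: sliding_window
          capacity: 10
          rate: 60s
          # Count requests per client IP address
          bucket_key:
            - conn.client_ip
```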
Restart the ngrok agent to enable `abc-policy.yaml`.
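Re-run the agent with the policy file attached to the internal endpoint:

```bash
ngrok http 4000 --url https://abc.internal --traffic-policy-file abc-policy.yaml
```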
Ready to test your rate limit in action? Run the command below after replacing `<YOUR_NGROK_DOMAIN>` and the path, if relevant.
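A quick loop that sends 20 requests and prints only the HTTP status codes:

```bash
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" "https://<YOUR_NGROK_DOMAIN>/abc"
done
```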
You'll see a few normal responses until you hit the rate limit, after which you'll see `429` errors. Run the same command on the `/xyz` path and you won't see the same errors, since you've applied this policy only to the `https://abc.internal` agent endpoint.
If you want all your APIs to have a consistent rate limiting strategy, you can move the rule to your cloud endpoint above the `jwt-validation` action.