
REST, GraphQL, or gRPC: Choosing the Right API Paradigm

Pick the right tool for the job and avoid costly architectural mistakes.


The Problem

If you are new to backend engineering, the world of APIs can feel overwhelming. You learn REST because it is everywhere, the de facto standard for web services. Then you hear about GraphQL and how it solves all of REST's problems. Then someone on your team mentions gRPC for a new microservice, talking about performance and protocol buffers. And then there is tRPC, which seems to be popular in the full-stack JavaScript world.

How do you decide what to use? This is not an academic question. The choice you make has long-term consequences for your system's performance, scalability, and the productivity of your team. Picking the wrong paradigm can lead to slow applications, bloated data transfers, tight coupling between teams, and a frustrating developer experience.

As engineers, we need to understand our tools deeply. This post will break down the core ideas behind REST, GraphQL, and gRPC. We will look at their strengths, weaknesses, and the specific problems they are designed to solve. By the end, you will have a clear mental model and a practical framework for choosing the right API paradigm for your next project.

Core Concepts

An Application Programming Interface (API) is a contract that allows two pieces of software to communicate. When we talk about web APIs, we are usually referring to how a client (like a web browser or mobile app) communicates with a server over a network. Let's dissect the most common paradigms.

REST

REST stands for REpresentational State Transfer. It is not a protocol or a library; it is an architectural style built on top of the HTTP protocol. It has been the dominant style for over a decade for a reason: it is simple, and it models the web itself.

The core idea of REST is to think about your system in terms of resources. A resource is any object or entity you want to expose, like a User, a Product, or an Order. Each resource has a unique identifier, which is its URL (Uniform Resource Locator).

You interact with these resources using standard HTTP methods, which map directly to CRUD (Create, Read, Update, Delete) operations:

  • GET /users/1: Retrieve the user with ID 1.

  • POST /users: Create a new user.

  • PUT /users/1: Update the entire user with ID 1.

  • PATCH /users/1: Update part of the user with ID 1.

  • DELETE /users/1: Delete the user with ID 1.

A key constraint of REST is that it is stateless. Every request from a client to a server must contain all the information needed to understand and complete the request. The server does not store any client context between requests. This makes REST APIs highly scalable because any server can handle any request, which simplifies load balancing.

When to use PUT vs PATCH?

PATCH is often overlooked but important. While PUT replaces the entire resource, PATCH modifies only the fields you specify. For example, if you just want to update a user's email without changing their other data, PATCH is the correct choice. It leads to smaller payloads and more efficient updates in many real-world APIs.
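The difference can be sketched in a few lines of Python. This is an illustrative model of the server-side semantics, not any particular framework's implementation:

```python
# Illustrative semantics of PUT vs PATCH on a resource stored as a dict.

def apply_put(resource: dict, payload: dict) -> dict:
    """PUT: replace the entire resource with the payload."""
    return dict(payload)

def apply_patch(resource: dict, payload: dict) -> dict:
    """PATCH: merge only the supplied fields into the resource."""
    return {**resource, **payload}

user = {"name": "Anas Khan", "email": "anas@example.com", "city": "Mumbai"}

print(apply_put(user, {"email": "new@example.com"}))
# PUT drops every field not present in the payload
print(apply_patch(user, {"email": "new@example.com"}))
# PATCH keeps name and city, changes only email
```

Notice that a careless PUT with a partial payload silently deletes fields, which is exactly why PATCH exists.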

[Image: HTTP methods mapped to CRUD operations. Source: Hanzalah Waheed on X]

Real-world Example: REST is the workhorse of service integration. Even inside large systems, services such as authentication, billing, and notifications commonly communicate with each other via REST APIs, in addition to the public REST APIs those systems expose.

Pros of REST:

  • Simplicity and Standardization: REST is easy to understand and implement. The learning curve is low, and the vast majority of web developers are familiar with it.

  • Leverages HTTP: It uses HTTP features, like caching for GET requests, to its full potential. Browsers, proxies, and CDNs can cache responses automatically, improving performance.

  • Wide Adoption: The tooling and ecosystem around REST are mature and extensive. Libraries exist for every language, and tools like OpenAPI (Swagger) make documentation and code generation straightforward.

  • Ideal for public APIs and service integration (e.g., Slack, Stripe, GitHub)

Cons of REST:

REST's simplicity is also its limitation. Two major problems often surface in complex applications:

  1. Over-fetching: The server defines the shape of the resource. When you request /users/1, you get the entire user object, which might include their name, email, address, creation date, and more. If your mobile app's homepage just needs to display the user's name, you are still fetching all the other data. This wastes bandwidth, which is critical for mobile users.

  2. Under-fetching and the N+1 Problem: Sometimes you need data from multiple resources. To show a user's profile and their last five orders, you first have to GET /users/1. Then, you have to make another request to GET /users/1/orders?limit=5. This requires multiple round trips to the server, which adds latency. It gets worse if you need to fetch a list of users and then fetch the orders for each user in that list (the N+1 query problem). These issues led to the development of GraphQL, which we will discuss next.
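The under-fetching pattern is easy to see in code. The sketch below simulates a REST client against an in-memory "server" (the endpoints and data are invented for illustration) and counts how many round trips a naive rendering loop makes:

```python
# Simulated REST backend: each get() call stands in for one HTTP round trip.
request_count = 0

USERS = {1: {"id": 1, "name": "Anas"}, 2: {"id": 2, "name": "Ninad"}}
ORDERS = {1: ["order-a", "order-b"], 2: ["order-c"]}

def get(path: str):
    """Pretend HTTP GET; increments the round-trip counter."""
    global request_count
    request_count += 1
    if path == "/users":
        return list(USERS.values())
    if path.startswith("/users/") and path.endswith("/orders"):
        user_id = int(path.split("/")[2])
        return ORDERS[user_id]
    raise ValueError(path)

# N+1 pattern: one request for the list, then one more per user for orders.
users = get("/users")
feed = [(u["name"], get(f"/users/{u['id']}/orders")) for u in users]

print(request_count)  # 1 list request + N per-user requests = 3 for N=2
```

With real network latency, each of those N extra requests adds a full round trip, which is what GraphQL's single-query model is designed to eliminate.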

Example: REST with FastAPI

Install:

pip install "fastapi[standard]"

main.py

from fastapi import FastAPI, HTTPException

app = FastAPI()

users = {
    "1": {"name": "Anas Khan", "email": "anas@example.com"},
    "2": {"name": "Ninad Naik", "email": "ninad@example.com"},
}

@app.get("/users/{user_id}")
def get_user(user_id: str):
    user = users.get(user_id)
    if not user:
        # FastAPI turns this into a 404 JSON response
        raise HTTPException(status_code=404, detail="User not found")
    return user

Run:

fastapi dev main.py

Test:

curl http://127.0.0.1:8000/users/1

Expected Output:

{"name": "Anas Khan", "email": "anas@example.com"}

GraphQL

GraphQL is not an architectural style but a query language for your API. Developed by Facebook and open-sourced in 2015, it was created to solve the exact problems of over-fetching and under-fetching that their mobile teams were facing.

With GraphQL, the client has the power. Instead of having multiple endpoints for different resources, you typically have a single endpoint (e.g., /graphql). The client sends a POST request to this endpoint with a query that specifies exactly the data it needs, including nested relationships.

The server then returns a JSON object that mirrors the structure of the query.

Consider the earlier user example. The client asks for exactly the fields it wants, and could extend the same query with nested relationships (such as the user's orders) without any extra round trips:

Example query:

query {
  user(id: "1") {
    name
    email
  }
}

The server response will be a JSON object containing only the user's name and email. Nothing more, nothing less.

GraphQL is also strongly typed. You define the capabilities of your API in a schema using the GraphQL Schema Definition Language (SDL). This schema acts as a contract between the client and the server, and it enables powerful developer tools like auto-completion and static validation of queries.
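A minimal schema for the user example might look like this in SDL (the field names here are assumed from the examples above, not taken from any real API):

```graphql
type User {
  id: ID!
  name: String!
  email: String!
}

type Query {
  user(id: ID!): User
}
```

The `!` marks a field as non-nullable, so tooling can verify at build time that a query like `{ user(id: "1") { name } }` is valid against the schema.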

Real-world Example: LeetCode uses GraphQL. Open your browser's DevTools, switch to the Network tab, and filter for /graphql. You will see the queries being made to fetch problem lists, discussions, and submissions.

[Screenshot: LeetCode's /graphql responses in the browser Network tab]

Pros of GraphQL:

  • Efficient Data Fetching: It eliminates over-fetching and under-fetching, making it ideal for applications with complex data needs or for clients on slow networks, like mobile apps.

  • Improved Developer Experience: Frontend developers can request the data they need without waiting for the backend team to create new endpoints. The schema provides excellent self-documentation.

  • Evolving APIs without Versioning: You can add new fields to your schema without breaking existing clients. Clients only get the data they explicitly ask for, so new additions are ignored unless requested. You can deprecate old fields and use tooling to see which clients are still using them.

Cons of GraphQL:

  • Complexity: GraphQL is more complex to set up and maintain than REST. You need to manage a schema, write resolvers (the functions that fetch the data for each field), and understand concepts like mutations (for writing data) and subscriptions (for real-time updates).

  • Caching Challenges: Since most GraphQL queries are sent as POST requests to a single endpoint, you cannot leverage standard HTTP caching. Client-side caching libraries like Apollo Client or Relay are powerful but add their own layer of complexity.

  • Potential for Abusive Queries: A malicious or poorly written client query could ask for many levels of nested data, triggering a massive database load. You need to implement safeguards like query depth limiting, timeouts, or cost analysis to prevent this.
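As an illustration of depth limiting, here is a toy depth counter over a query represented as nested dicts (a stand-in for a real GraphQL AST; a production server would use its framework's built-in safeguards rather than this sketch):

```python
# Toy guard against deeply nested queries. A query is modeled as nested
# dicts mapping field names to their sub-selections ({} means a leaf field).

def query_depth(selection: dict) -> int:
    """Depth of the deepest selection path."""
    if not selection:
        return 0
    return 1 + max(query_depth(sub) for sub in selection.values())

def check_depth(selection: dict, max_depth: int = 5) -> None:
    depth = query_depth(selection)
    if depth > max_depth:
        raise ValueError(f"query depth {depth} exceeds limit {max_depth}")

# Models: { user { orders { items { product { reviews } } } } }
deep_query = {"user": {"orders": {"items": {"product": {"reviews": {}}}}}}
print(query_depth(deep_query))  # 5
check_depth(deep_query, max_depth=5)  # passes; max_depth=4 would raise
```

The same idea generalizes to query cost analysis, where each field is assigned a weight and the total is capped before execution.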

Example: GraphQL with Strawberry

Install:

pip install "fastapi[standard]" "strawberry-graphql[fastapi]"

main.py

import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter

@strawberry.type
class User:
    id: str
    name: str
    email: str

users = {
    "1": User(id="1", name="Anas Khan", email="anas@example.com"),
    "2": User(id="2", name="Ninad Naik", email="ninad@example.com"),
}

@strawberry.type
class Query:
    @strawberry.field
    def user(self, id: str) -> User | None:
        return users.get(id)

schema = strawberry.Schema(query=Query)

graphql_app = GraphQLRouter(schema)

app = FastAPI()
app.include_router(graphql_app, prefix="/graphql")

Run:

fastapi dev main.py

Test: We will send this query:

{
  user(id: "2") {
    name
    email
  }
}

Send the query using curl:

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ user(id: \"2\") { name email } }"}' \
  http://127.0.0.1:8000/graphql

Expected Output:

{
  "data": {
    "user": {
      "name": "Ninad Naik",
      "email": "ninad@example.com"
    }
  }
}

gRPC

gRPC is a modern, open-source RPC framework developed at Google (the "g" is commonly expanded as "Google," though the project itself treats it as a recursive acronym). It takes a different approach: while REST is about resources and GraphQL is about queries, gRPC is all about actions. It is based on the idea of a Remote Procedure Call (RPC), where a client can directly call a method on a server application on a different machine as if it were a local object.

With gRPC, you define your services and messages in a .proto file using Protocol Buffers (Protobuf). Protobuf is Google's language-neutral, platform-neutral mechanism for serializing structured data. Think of it like JSON, but smaller, faster, and it generates native code for you.

Here is a simple Protobuf definition:

syntax = "proto3";

service UserService {
  rpc GetUser(UserRequest) returns (UserResponse);
}

message UserRequest {
  string id = 1;
}

message UserResponse {
  string name = 1;
  string email = 2;
}

You use a Protobuf compiler (protoc) to generate client and server code in your language of choice (e.g., Python, Go, Java). The client then gets a "stub" that has the same methods as the server. When the client calls GetUser(), gRPC handles serializing the request into a compact binary format, sending it to the server, and deserializing the binary response.

gRPC is built on HTTP/2, which is a major upgrade over the HTTP/1.1 that most REST APIs use. HTTP/2 brings features like:

  • Multiplexing: Sending multiple requests and responses over a single TCP connection, reducing latency.

  • Bi-directional Streaming: The client and server can send a stream of messages to each other independently and simultaneously. This is perfect for real-time applications.
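Streaming is declared directly in the service contract. A hypothetical telemetry service (all names here are invented for illustration) marks a response as a stream with the `stream` keyword:

```proto
syntax = "proto3";

service Telemetry {
  // The server pushes a stream of readings for one subscription request.
  rpc Subscribe(SubscribeRequest) returns (stream Reading);
}

message SubscribeRequest {
  string sensor_id = 1;
}

message Reading {
  double value = 1;
  int64 timestamp = 2;
}
```

Client-side and bi-directional streaming work the same way: prefix the request type, the response type, or both with `stream`.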

Pros of gRPC:

  • Bandwidth and Efficiency: The first and biggest advantage of gRPC is its use of Protocol Buffers, which are binary and typically much smaller than the equivalent JSON (the exact savings depend on the message shape, since Protobuf drops field names and encodes numbers compactly). It also minimizes network overhead through HTTP/2 multiplexing, which allows multiple requests and responses to share a single connection without repeated handshakes. In large-scale microservice systems with continuous internal communication, these savings compound. Because of this, gRPC is particularly well suited for performance-critical workloads such as financial trading platforms, IoT telemetry systems, and real-time streaming applications where low latency and high throughput are essential.

  • Microservice Communication: This is gRPC's sweet spot. For internal, server-to-server communication where performance is critical, gRPC is unmatched. The low latency and high throughput make it ideal.

  • Streaming: If you need real-time, bi-directional streaming for things like chat applications, live data feeds, or IoT devices, gRPC provides this out of the box.

Cons of gRPC:

  • Limited Browser Support: You cannot call a gRPC service directly from a web browser. The browser does not give you the fine-grained control over HTTP/2 requests that gRPC requires. You need a proxy layer like gRPC-Web to translate requests.

  • Less Human-Readable: The payload is a binary format. You cannot just use curl to inspect a request or response easily like you can with JSON. Debugging can be more challenging without the right tools.

  • Steeper Learning Curve: It requires you to learn Protobuf, understand the code generation process, and work with a more rigid, contract-first approach.

Example: gRPC in Python

Install the tooling, then (after creating user.proto below) generate the Python stubs user_pb2.py and user_pb2_grpc.py:

pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. user.proto

user.proto

syntax = "proto3";

service UserService {
  rpc GetUser(UserRequest) returns (UserResponse);
}

message UserRequest {
  string id = 1;
}

message UserResponse {
  string name = 1;
  string email = 2;
}

server.py

import grpc
from concurrent import futures
import user_pb2
import user_pb2_grpc

users = {
    "1": {"name": "Anas Khan", "email": "anas@example.com"},
    "2": {"name": "Ninad Naik", "email": "ninad@example.com"}
}

class UserService(user_pb2_grpc.UserServiceServicer):
    def GetUser(self, request, context):
        user = users.get(request.id)
        if user:
            return user_pb2.UserResponse(name=user["name"], email=user["email"])
        context.abort(grpc.StatusCode.NOT_FOUND, "User not found")

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    user_pb2_grpc.add_UserServiceServicer_to_server(UserService(), server)
    server.add_insecure_port('[::]:50051')
    print("Server running on port 50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()

client.py

import grpc
import user_pb2
import user_pb2_grpc

def run():
    with grpc.insecure_channel('localhost:50051') as channel:
        stub = user_pb2_grpc.UserServiceStub(channel)

        response = stub.GetUser(user_pb2.UserRequest(id="1"))
        print(f"User: {response.name}, Email: {response.email}")

        response = stub.GetUser(user_pb2.UserRequest(id="2"))
        print(f"User: {response.name}, Email: {response.email}")

if __name__ == "__main__":
    run()

Run Server (terminal 1):

python server.py

Run Client (terminal 2):

python client.py

Output:

User: Anas Khan, Email: anas@example.com
User: Ninad Naik, Email: ninad@example.com

A Quick Note on tRPC

You might also hear about tRPC (TypeScript Remote Procedure Call). It is important to understand this is not a general-purpose paradigm like the others. tRPC is designed specifically for full-stack TypeScript applications, often in a monorepo. It allows you to share types between your backend and frontend, giving you end-to-end type safety. Your frontend can call backend functions as if they were local, with full auto-completion and type checking. It is extremely lightweight and offers an amazing developer experience, but it tightly couples your client and server and is only for the TypeScript ecosystem.

Mental Model: Ordering Food

To solidify these concepts, let's use an analogy: ordering food at a restaurant.

  • REST is like ordering from a fixed menu. You go to the counter and say "I want the number 5 combo." You get exactly what the menu describes for the number 5 combo: a burger, fries, and a drink. If you do not want the drink (over-fetching), too bad, it comes with the combo. If you want an extra side of onion rings (under-fetching), you have to make a separate order.

  • GraphQL is like a custom buffet where you give the chef a specific list. You go to a single counter and hand them a detailed note: "I want a burger patty, two slices of cheddar cheese, a toasted bun, a handful of lettuce, and three onion rings." The chef gets you exactly that, all in one trip. You get precisely what you need, efficiently.

  • gRPC is like the high-speed, internal communication system between the restaurant's kitchens. It is not for customers. The kitchens use a special, highly efficient shorthand (Protobuf) and a pneumatic tube system (HTTP/2) to send precisely measured ingredients (binary data) to each other instantly. It is all about speed and efficiency for internal operations.

Step-by-Step: How to Choose Your Paradigm

When starting a new project, ask yourself these questions to guide your decision.

1. Who are the primary consumers of this API?

  • Public / Third-Party Developers: Lean towards REST. Its simplicity, standardization, and vast ecosystem make it the easiest for external developers to adopt. The barrier to entry is low.

  • Your Own Frontend Teams (Web & Mobile): GraphQL is a very strong contender. It decouples the frontend from the backend, allowing them to iterate faster. The efficiency gains are especially valuable for mobile clients.

  • Internal Microservices: gRPC is the default choice here. The performance, strict contracts, and streaming capabilities are perfect for server-to-server communication.

  • A Tightly Coupled Full-Stack TypeScript App: Consider tRPC for an unbeatable developer experience and end-to-end type safety.

2. What does your data look like?

  • Simple, well-defined resources (CRUD-heavy): REST is a natural fit. Its resource-based model works perfectly for this.

  • Complex, nested, or graph-like data: GraphQL excels. A social media feed with users, posts, comments, and likes is a classic use case.

  • Actions and Commands: If your API is more about performing actions (archiveUser, calculateRisk) than managing resources, an RPC-style API like gRPC feels more natural.

3. What are your performance and network requirements?

  • Standard Web Traffic: REST and GraphQL are generally sufficient.

  • Critical Low Latency & High Throughput: gRPC is the clear winner. For applications like financial trading systems or real-time multiplayer game backends, every microsecond counts.

  • Bandwidth Constrained Clients (Mobile/IoT): GraphQL provides fine-grained control to minimize data transfer. gRPC is also extremely efficient due to its binary format.

Trade-offs and Considerations

| Feature | REST | GraphQL | gRPC |
|---|---|---|---|
| Paradigm | Architectural style (resources) | Query language | RPC framework (actions) |
| Data format | JSON (most common), XML, etc. | JSON | Protocol Buffers (binary) |
| Transport | HTTP/1.1, HTTP/2 | HTTP/1.1, HTTP/2 | HTTP/2 |
| Performance | Good, but prone to payload issues | Excellent data-fetching efficiency | Highest performance, lowest latency |
| Caching | Excellent via standard HTTP caching | Complex, requires client-side libraries | Not supported by default, handled at app level |
| Schema/contract | Optional (OpenAPI) | Required (strongly typed schema) | Required (strict .proto contract) |
| Developer experience | Low learning curve, easy to start | Frontend freedom, powerful tooling | Contract-first, excellent for polyglot systems |
| Best use case | Public APIs, simple resource services | Mobile/web frontends, complex data models | Internal microservices, streaming, performance |

Key Takeaways

There is no "best" API paradigm. The right choice depends entirely on the problem you are solving.

As developers, our job is to understand the trade-offs. Analyze your requirements and then pick the right tool. Building scalable systems starts with making these foundational architectural decisions correctly.