# Microservices Communication: gRPC vs. REST

Microservices need to communicate. REST over HTTP/1.1 with JSON is the default choice: familiar, browser-compatible, and human-readable. gRPC over HTTP/2 with Protocol Buffers is the high-performance alternative: strongly typed, faster serialization, streaming support, and generated clients in any language. Choosing between them depends on your latency requirements, team expertise, and whether external browsers need to call your APIs.
## The Core Difference
REST + JSON:

```
Client → HTTP POST /orders
Body: {"userId": "abc", "items": [{"id": "prod-1", "qty": 2}]}
```

- Human-readable, easy to debug with curl
- JSON parsing at every service boundary
- No schema enforcement without extra tooling (OpenAPI, Zod, etc.)

gRPC + Protobuf:

```
Client → binary RPC CreateOrder(CreateOrderRequest{userId: "abc", items: [...]})
```

- Machine-optimized, requires tooling to inspect
- Binary encoding: ~3-10× smaller payloads, ~5-10× faster parsing
- Schema enforced at compile time (protobuf generates typed client/server code)

## Protocol Buffers: The Schema
gRPC services are defined in .proto files. These generate type-safe client and server code in any supported language.
```proto
// order.proto
syntax = "proto3";

package order;

option java_package = "com.company.order";
option java_multiple_files = true;

// Enums
enum OrderStatus {
  ORDER_STATUS_UNSPECIFIED = 0; // proto3: first enum value must be 0
  ORDER_STATUS_PENDING = 1;
  ORDER_STATUS_CONFIRMED = 2;
  ORDER_STATUS_SHIPPED = 3;
  ORDER_STATUS_CANCELLED = 4;
}

// Messages
message OrderItem {
  string product_id = 1;
  int32 quantity = 2;
  double unit_price = 3;
}

message CreateOrderRequest {
  string user_id = 1;
  repeated OrderItem items = 2;
}

message CreateOrderResponse {
  string order_id = 1;
  OrderStatus status = 2;
  double total = 3;
}

message GetOrderRequest {
  string order_id = 1;
}

message ListOrdersRequest {
  string user_id = 1;
  int32 page_size = 2; // pagination
  string page_token = 3;
}

// Response for a unary, page-based alternative to the streaming ListOrders below
message ListOrdersResponse {
  repeated CreateOrderResponse orders = 1;
  string next_page_token = 2;
}

// Service definition
service OrderService {
  // Unary RPC (request-response, like REST)
  rpc CreateOrder(CreateOrderRequest) returns (CreateOrderResponse);
  rpc GetOrder(GetOrderRequest) returns (CreateOrderResponse);

  // Server streaming: client sends one request, server sends multiple responses
  rpc ListOrders(ListOrdersRequest) returns (stream CreateOrderResponse);

  // Client streaming: client sends a stream, server replies once
  rpc BatchCreateOrders(stream CreateOrderRequest) returns (CreateOrderResponse);

  // Bidirectional streaming
  rpc OrderUpdates(stream GetOrderRequest) returns (stream CreateOrderResponse);
}
```

Generate code:
```shell
# Install protoc and the gRPC plugins
# Then generate:
protoc --java_out=src/main/java \
  --grpc-java_out=src/main/java \
  --proto_path=src/main/proto \
  src/main/proto/order.proto
```

## gRPC Server (Java)
```java
// OrderServiceImpl.java
import io.grpc.Status;
import io.grpc.stub.StreamObserver;

public class OrderServiceImpl extends OrderServiceGrpc.OrderServiceImplBase {

  private final OrderService orderService;
  private final OrderRepository orderRepository;

  public OrderServiceImpl(OrderService orderService, OrderRepository orderRepository) {
    this.orderService = orderService;
    this.orderRepository = orderRepository;
  }

  @Override
  public void createOrder(
      CreateOrderRequest request,
      StreamObserver<CreateOrderResponse> responseObserver
  ) {
    try {
      // Validate
      if (request.getUserId().isEmpty()) {
        responseObserver.onError(
            Status.INVALID_ARGUMENT
                .withDescription("userId is required")
                .asRuntimeException()
        );
        return;
      }

      // Process
      Order order = orderService.create(
          request.getUserId(),
          request.getItemsList().stream()
              .map(i -> new OrderItem(i.getProductId(), i.getQuantity(), i.getUnitPrice()))
              .toList()
      );

      // Respond
      CreateOrderResponse response = CreateOrderResponse.newBuilder()
          .setOrderId(order.getId())
          .setStatus(OrderStatus.ORDER_STATUS_CONFIRMED)
          .setTotal(order.getTotal())
          .build();
      responseObserver.onNext(response);
      responseObserver.onCompleted();
    } catch (Exception e) {
      responseObserver.onError(
          Status.INTERNAL.withDescription(e.getMessage()).asRuntimeException()
      );
    }
  }

  // Server streaming: send orders one by one
  @Override
  public void listOrders(
      ListOrdersRequest request,
      StreamObserver<CreateOrderResponse> responseObserver
  ) {
    try {
      // Stream results from the DB without loading them all into memory.
      // toProto maps the domain Order to CreateOrderResponse (omitted here).
      orderRepository.streamByUserId(request.getUserId(), order -> {
        responseObserver.onNext(toProto(order));
      });
      responseObserver.onCompleted();
    } catch (Exception e) {
      responseObserver.onError(Status.INTERNAL.withCause(e).asRuntimeException());
    }
  }
}
```

Start the server:

```java
Server server = ServerBuilder.forPort(50051)
    .addService(new OrderServiceImpl(orderService, orderRepository))
    .addService(ProtoReflectionService.newInstance()) // enables grpcurl, etc.
    .intercept(new LoggingInterceptor())
    .build()
    .start();
```

## gRPC Client (Java)
```java
// Blocking stub (synchronous, use in non-reactive code)
ManagedChannel channel = ManagedChannelBuilder
    .forAddress("order-service", 50051)
    .usePlaintext() // use .useTransportSecurity() in production
    .build();

OrderServiceGrpc.OrderServiceBlockingStub blockingStub =
    OrderServiceGrpc.newBlockingStub(channel)
        .withDeadlineAfter(5, TimeUnit.SECONDS);

CreateOrderResponse response = blockingStub.createOrder(
    CreateOrderRequest.newBuilder()
        .setUserId("user-1")
        .addItems(OrderItem.newBuilder()
            .setProductId("prod-1")
            .setQuantity(2)
            .setUnitPrice(29.99)
            .build())
        .build()
);

// Async stub (non-blocking)
OrderServiceGrpc.OrderServiceStub asyncStub = OrderServiceGrpc.newStub(channel);

// Server streaming
asyncStub.listOrders(
    ListOrdersRequest.newBuilder().setUserId("user-1").setPageSize(50).build(),
    new StreamObserver<>() {
      @Override public void onNext(CreateOrderResponse order) {
        process(order); // called for each order as it arrives
      }
      @Override public void onError(Throwable t) { log.error("Stream error", t); }
      @Override public void onCompleted() { log.info("Stream complete"); }
    }
);
```

## REST Implementation for the Same API
```typescript
// REST equivalent in Express/TypeScript
// (assumes app.use(express.json()) is registered for body parsing)

// POST /orders
app.post('/orders', async (req, res) => {
  const { userId, items } = req.body;
  if (!userId) return res.status(400).json({ error: 'userId is required' });
  const order = await orderService.create(userId, items);
  res.status(201).json({
    orderId: order.id,
    status: 'CONFIRMED',
    total: order.total,
  });
});

// GET /orders?userId=abc&pageSize=50
app.get('/orders', async (req, res) => {
  // query params arrive as strings; coerce pageSize to a number
  const { userId, pageSize = 20, pageToken } = req.query;
  const { orders, nextPageToken } = await orderService.list(userId, Number(pageSize), pageToken);
  res.json({ orders, nextPageToken });
});
```

## Comparison Table
| Feature | REST + JSON | gRPC + Protobuf |
|---|---|---|
| Protocol | HTTP/1.1 or HTTP/2 | HTTP/2 (required) |
| Payload format | JSON (text) | Protobuf (binary) |
| Payload size | Baseline | 3–10× smaller |
| Parse speed | Baseline | 5–10× faster |
| Schema enforcement | Optional (OpenAPI, Zod) | Mandatory (proto file) |
| Streaming | SSE / WebSocket (add-on) | Native (4 streaming modes) |
| Browser support | Native | Requires grpc-web proxy |
| Code generation | Optional (OpenAPI generators) | Built-in (protoc) |
| Human readability | High | Low (binary) |
| Debugging tools | curl, Postman, browser | grpcurl, grpc-ui |
| Error model | HTTP status codes | Status codes + rich metadata |
| Language support | All | Most mainstream languages |
| Learning curve | Low | Medium |
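To make the payload-size row concrete: protobuf encodes each field as a numeric key (field number plus wire type) followed by a compact binary value, while JSON repeats full field names as text. Below is a language-neutral sketch of the varint wire encoding in Python (stdlib only; the field number matches `quantity = 2` from the proto above, and the byte counts are illustrative, not a benchmark):

```python
import json

def varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def tag(field_number: int, wire_type: int) -> bytes:
    """Field key: (field_number << 3) | wire_type."""
    return varint((field_number << 3) | wire_type)

# quantity = 2 on the wire: field number 2, wire type 0 (varint)
proto_qty = tag(2, 0) + varint(2)                 # 2 bytes total
json_qty = json.dumps({"quantity": 2}).encode()   # 15 bytes

print(len(proto_qty), len(json_qty))  # → 2 15
```

Two bytes versus fifteen for one integer field is where figures like 3–10× come from; real messages tend to land lower in that range, since string values occupy the same space in both formats.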
## When to Use Each
Use REST when:
- External clients (browsers, mobile apps, third-party integrations) consume the API
- Your team is not familiar with Protocol Buffers
- You need human-readable payloads for easy debugging
- Payload size and parse performance are not bottlenecks
- You're building a public API (documentation, SDK generation)
Use gRPC when:
- Service-to-service communication with no browser clients
- Latency-sensitive internal APIs (payment processing, real-time systems)
- You need streaming (real-time updates, large dataset transfer)
- Multiple languages in your microservices (gRPC generates clients for all)
- Payload size matters at scale (mobile battery/data, high-frequency calls)
Hybrid approach (most teams):
- gRPC for internal microservice-to-microservice communication
- REST for external-facing APIs (customer-facing, partner integrations)
- grpc-web or a gateway (Envoy, Kong) to translate gRPC to REST at the edge
## Service Discovery and Load Balancing
```java
// gRPC with client-side load balancing
ManagedChannel channel = ManagedChannelBuilder
    .forTarget("dns:///order-service.production.svc.cluster.local:50051")
    .defaultLoadBalancingPolicy("round_robin")
    .usePlaintext()
    .build();

// In Kubernetes, services are discoverable by DNS name, but pod IPs change.
// Use a headless service for gRPC: it returns all pod IPs so the client can
// balance per call. A regular ClusterIP service balances per connection, and
// gRPC multiplexes every call over one long-lived HTTP/2 connection, so all
// traffic would pin to a single pod.
// Regular K8s services work fine for REST (per-connection L4 balancing is enough).
```

```yaml
# Kubernetes headless service for gRPC (bypasses kube-proxy)
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  clusterIP: None # headless: DNS returns all pod IPs
  selector:
    app: order-service
  ports:
    - port: 50051
      name: grpc
```

## gRPC Interceptors (Middleware)
```java
// Server-side interceptor for auth, logging, metrics
public class AuthInterceptor implements ServerInterceptor {
  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call,
      Metadata headers,
      ServerCallHandler<ReqT, RespT> next
  ) {
    String token = headers.get(
        Metadata.Key.of("authorization", Metadata.ASCII_STRING_MARSHALLER));
    if (token == null || !tokenValidator.isValid(token)) {
      call.close(Status.UNAUTHENTICATED.withDescription("Invalid token"), new Metadata());
      return new ServerCall.Listener<>() {};
    }
    return next.startCall(call, headers);
  }
}
```

## Frequently Asked Questions
Q: Can I use gRPC from a browser without any proxy?
Not directly. The browser's Fetch API and XMLHttpRequest don't support HTTP/2 trailers, which gRPC requires for status codes. You need grpc-web (a JavaScript library) paired with a proxy like Envoy or Nginx that translates between grpc-web and native gRPC. Alternatively, use Connect (a newer protocol from Buf) which is compatible with both REST clients and native gRPC, eliminating the proxy requirement.
Q: How do gRPC errors map to HTTP status codes?
gRPC has its own status codes (OK, INVALID_ARGUMENT, NOT_FOUND, UNAUTHENTICATED, PERMISSION_DENIED, INTERNAL, UNAVAILABLE, etc.). When exposed through a REST gateway, they map approximately to: INVALID_ARGUMENT → 400, NOT_FOUND → 404, UNAUTHENTICATED → 401, PERMISSION_DENIED → 403, INTERNAL → 500, UNAVAILABLE → 503. The gRPC error model also supports Status.Details—you can attach structured error metadata (field violations, retry info) beyond what HTTP status codes can express.
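The mapping described above can be written down as a simple lookup table. A minimal sketch in Python (covering only the codes listed here; real gateways such as grpc-gateway define the complete table):

```python
# gRPC status code name → approximate HTTP status, per common gateway mappings
GRPC_TO_HTTP = {
    "OK": 200,
    "INVALID_ARGUMENT": 400,
    "UNAUTHENTICATED": 401,
    "PERMISSION_DENIED": 403,
    "NOT_FOUND": 404,
    "INTERNAL": 500,
    "UNAVAILABLE": 503,
}

def to_http(code: str) -> int:
    # Codes without a clean HTTP equivalent fall back to 500
    return GRPC_TO_HTTP.get(code, 500)
```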
Q: Is REST with HTTP/2 and binary formats (MessagePack) comparable to gRPC performance?
REST over HTTP/2 with efficient binary serialization can approach gRPC performance for latency and throughput. The remaining advantages of gRPC are: generated type-safe clients (huge DX win), standardized streaming primitives, and the ecosystem (health checking, reflection, service mesh integration). If you're already on HTTP/2 and using OpenAPI code generation, switching to gRPC may not justify the migration cost for an existing system.
Q: How does gRPC handle backward compatibility when the proto schema changes?
Protocol Buffers has strong backward and forward compatibility guarantees if you follow the rules: never reuse field numbers, use optional for new fields (proto3 treats all scalar fields as optional), and never change a field's type. Old clients receiving a message with new fields will ignore unknown fields. New clients receiving messages from old servers will get zero values for missing fields. This makes rolling deployments safe—you can deploy the new server version while old clients are still running.
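The "ignore unknown fields" guarantee falls directly out of the wire format: every field is prefixed with a key carrying its field number and wire type, so a decoder built against an old schema can skip fields it has never heard of. A minimal illustrative decoder in Python (handles only the varint and length-delimited wire types; real protobuf runtimes typically also retain the skipped bytes for re-serialization):

```python
def read_varint(buf: bytes, i: int):
    """Decode a base-128 varint starting at index i; return (value, next_index)."""
    value, shift = 0, 0
    while True:
        byte = buf[i]
        value |= (byte & 0x7F) << shift
        i += 1
        if not (byte & 0x80):
            return value, i
        shift += 7

def decode_known(buf: bytes, known_fields: set):
    """Decode varint (wire type 0) and length-delimited (wire type 2) fields,
    silently skipping any field number the 'old schema' doesn't know about."""
    result, i = {}, 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire_type = key >> 3, key & 0x07
        if wire_type == 0:          # varint (ints, enums, bools)
            value, i = read_varint(buf, i)
        elif wire_type == 2:        # length-delimited (strings, nested messages)
            length, i = read_varint(buf, i)
            value, i = buf[i:i + length], i + length
        else:
            raise ValueError(f"unsupported wire type {wire_type}")
        if field in known_fields:   # unknown field numbers are simply dropped
            result[field] = value
    return result

# Message with fields 1 (varint 5), 2 (string "hi"), and a "new" field 3 (varint 7);
# an old client that only knows fields 1 and 2 skips field 3 without error.
msg = b"\x08\x05\x12\x02hi\x18\x07"
print(decode_known(msg, {1, 2}))  # → {1: 5, 2: b'hi'}
```

A missing field simply never appears in the decoded result, which is why new clients reading old messages see zero values: the runtime substitutes the type's default when a known field is absent.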
