
gRPC Download: Google gRPC Framework Download Links and Methods

1. gRPC Overview and Version Notes

gRPC is Google's open-source, high-performance, general-purpose RPC framework. It transports over HTTP/2 and uses Protocol Buffers as both its interface definition language and serialization format. gRPC supports many programming languages, including C++, Java, Python, Go, Ruby, C#, Node.js, Android Java, Objective-C, and PHP.

With high performance, low latency, and bidirectional streaming, gRPC is widely used in microservice architectures, mobile app communication, and distributed systems. gRPC supports four communication patterns: unary (simple) RPC, server streaming RPC, client streaming RPC, and bidirectional streaming RPC.

gRPC core features:

– High-performance communication: built on HTTP/2, with multiplexing and header compression
– Strongly typed interfaces: service contracts defined with Protocol Buffers
– Multi-language support: native implementations for 12+ programming languages
– Four communication patterns: unary RPC, server streaming, client streaming, bidirectional streaming
– Bidirectional streaming: real-time two-way communication between client and server
– Flow control: built-in flow control and congestion control mechanisms
– Security: TLS/SSL encryption and multiple authentication mechanisms
– Load balancing: client-side and server-side (proxy) load balancing

gRPC core concepts:

– Protocol Buffers: Google's serialization format, used to define services and messages
– Service: a service definition containing a set of RPC methods
– Message: a message definition describing request and response data structures
– Channel: a gRPC channel, the connection to a server
– Stub: the client-side stub used to invoke remote methods
– Server: the gRPC server that listens on a port and handles requests
– Interceptor: a hook in the request/response processing chain
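The interceptor concept above is language-agnostic: each interceptor wraps the next handler, forming a chain the request passes through. A minimal, dependency-free Python sketch of that chaining idea (the handler names are illustrative, no gRPC involved):

```python
from typing import Callable, List

Handler = Callable[[str], str]

def logging_interceptor(log: List[str]):
    """Returns an interceptor that records each request, then delegates."""
    def wrap(next_handler: Handler) -> Handler:
        def handle(request: str) -> str:
            log.append(request)           # observe the request
            return next_handler(request)  # pass it down the chain
        return handle
    return wrap

def say_hello(request: str) -> str:
    """Stand-in for the actual service method."""
    return f"Hello {request}"

log: List[str] = []
handler = logging_interceptor(log)(say_hello)  # chain: logging -> service
print(handler("fengge"))  # Hello fengge
```

In real gRPC the shape is the same: `interceptCall` receives the call plus a `next` handler and decides whether (and how) to delegate.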

2. gRPC Version Selection and Download Links

gRPC offers multiple release lines; stable releases are recommended for production.

gRPC release status:

Release line  Status  Latest version  Release date  Notes
1.80.x        Latest  1.80.0          2026-03-30    Newest release
1.78.x        Stable  1.78.0          2026-03-18    Stable release
1.70.x        Stable  1.70.1          2025-XX-XX    Long-term support release
1.68.x        Stable  1.68.2          2025-XX-XX    Stable release

Highlights of gRPC 1.80.x:

– Improved performance and stability
– Support for new compression algorithms
– Enhanced security features
– Improved error handling
– New language binding support

Official download links:

gRPC website: https://grpc.io/
GitHub repository: https://github.com/grpc/grpc
Releases page: https://github.com/grpc/grpc/releases
Documentation: https://grpc.io/docs/

3. gRPC Download Methods in Detail

Method 1: Java Maven dependencies

Add the gRPC dependencies to pom.xml:

<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-netty-shaded</artifactId>
    <version>1.80.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-protobuf</artifactId>
    <version>1.80.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>1.80.0</version>
</dependency>
<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>4.29.3</version>
</dependency>
<dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3.2</version>
</dependency>
Add the protobuf compiler plugin:

<build>
    <extensions>
        <extension>
            <groupId>kr.motd.maven</groupId>
            <artifactId>os-maven-plugin</artifactId>
            <version>1.7.1</version>
        </extension>
    </extensions>
    <plugins>
        <plugin>
            <groupId>org.xolstice.maven.plugins</groupId>
            <artifactId>protobuf-maven-plugin</artifactId>
            <version>0.6.1</version>
            <configuration>
                <protocArtifact>com.google.protobuf:protoc:4.29.3:exe:${os.detected.classifier}</protocArtifact>
                <pluginId>grpc-java</pluginId>
                <pluginArtifact>io.grpc:protoc-gen-grpc-java:1.80.0:exe:${os.detected.classifier}</pluginArtifact>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>compile-custom</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Method 2: Python pip install

Install the gRPC core library:
$ pip install grpcio

Example output:
Collecting grpcio
Downloading grpcio-1.80.0-cp311-cp311-manylinux_2_17_x86_64.whl (58.2 MB)
Installing collected packages: grpcio
Successfully installed grpcio-1.80.0

Install the gRPC tools (includes the protoc compiler):
$ pip install grpcio-tools

Example output:
Collecting grpcio-tools
Downloading grpcio_tools-1.80.0-cp311-cp311-manylinux_2_17_x86_64.whl (12.3 MB)
Installing collected packages: grpcio-tools
Successfully installed grpcio-tools-1.80.0

Install a specific version:
$ pip install grpcio==1.80.0
$ pip install grpcio-tools==1.80.0

Method 3: Go installation

Install the gRPC Go module:
$ go get google.golang.org/grpc@v1.80.0

Example output:
go: added google.golang.org/grpc v1.80.0

Install the protoc compiler Go plugins:
$ go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
$ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

Example output:
go: downloading google.golang.org/protobuf v1.36.4
go: downloading google.golang.org/grpc v1.80.0

Verify the installation:
$ protoc-gen-go --version
protoc-gen-go v1.36.4

$ protoc-gen-go-grpc --version
protoc-gen-go-grpc 1.5.1

Method 4: Building C++ from source

Clone the source repository:
$ git clone --recurse-submodules -b v1.80.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc
$ cd grpc/

Create a build directory:
$ mkdir -p cmake/build
$ cd cmake/build

Configure CMake:
$ cmake ../.. -DCMAKE_BUILD_TYPE=Release -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF

Example output:
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done

-- Configuring done
-- Generating done
-- Build files have been written to: /fgeudb/grpc/cmake/build

Build and install:
$ make -j$(nproc)

Example output:
[  1%] Building C object CMakeFiles/address_sorting.dir/third_party/address_sorting/address_sorting.c.o
[  2%] Building CXX object CMakeFiles/gpr.dir/src/core/lib/gpr/alloc.cc.o

[100%] Built target grpc++_reflection

$ sudo make install

Example output:
Install the project...
-- Install configuration: "Release"
-- Installing: /usr/local/lib/libgrpc.so.1.80.0

Method 5: Node.js installation

Install the gRPC Node.js package:
$ npm install @grpc/grpc-js

Example output:
added 2 packages in 3s

Install the protoc compiler tools for JavaScript:
$ npm install -g grpc-tools

Example output:
added 1 package in 2s

Install protobufjs:
$ npm install protobufjs

Example output:
added 3 packages in 2s

Verify the installation:
$ grpc_tools_node_protoc --version
libprotoc 29.3

4. Setting Up a gRPC Development Environment

Step 1: Install the Protocol Buffers compiler

Download the protoc compiler:
$ cd /fgeudb/software
$ wget https://github.com/protocolbuffers/protobuf/releases/download/v29.3/protoc-29.3-linux-x86_64.zip

Example output:
--2026-04-04 10:00:00-- https://github.com/protocolbuffers/protobuf/releases/download/v29.3/protoc-29.3-linux-x86_64.zip
Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3456789 (3.3M) [application/zip]
Saving to: 'protoc-29.3-linux-x86_64.zip'

protoc-29.3-linux-x86_64.zip 100%[======================================================================>] 3.30M 5.2MB/s in 0.6s

Unpack and install:
$ unzip protoc-29.3-linux-x86_64.zip -d /usr/local/
$ chmod +x /usr/local/bin/protoc

Verify the installation:
$ protoc --version

Example output:
libprotoc 29.3

Step 2: Define the proto file

Create the proto file:
$ mkdir -p /fgeudb/grpc/proto
$ vi /fgeudb/grpc/proto/demo.proto

syntax = "proto3";

package demo;

option java_multiple_files = true;
option java_package = "com.fgedu.grpc.demo";
option java_outer_classname = "DemoProto";
option go_package = "fgedu.net.cn/grpc/demo";

service DemoService {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
  rpc SayHelloStream (HelloRequest) returns (stream HelloReply) {}
  rpc SayHelloClientStream (stream HelloRequest) returns (HelloReply) {}
  rpc SayHelloBidiStream (stream HelloRequest) returns (stream HelloReply) {}
}

message HelloRequest {
  string name = 1;
  int32 age = 2;
}

message HelloReply {
  string message = 1;
  int64 timestamp = 2;
}

Step 3: Compile the proto file (Java)

Compile with Maven:
$ cd /fgeudb/grpc/java-demo
$ mvn clean compile

Example output:
[INFO] Scanning for projects...
[INFO] Building grpc-java-demo 1.0.0
[INFO] ------------------------------------------------------------------------
[INFO] Generating /fgeudb/grpc/java-demo/target/generated-sources/protobuf/java/com/fgedu/grpc/demo/DemoProto.java
[INFO] Generating /fgeudb/grpc/java-demo/target/generated-sources/protobuf/grpc-java/com/fgedu/grpc/demo/DemoServiceGrpc.java
[INFO] BUILD SUCCESS

Compile manually:
$ protoc --java_out=src/main/java \
  --grpc-java_out=src/main/java \
  --plugin=protoc-gen-grpc-java=/usr/local/bin/protoc-gen-grpc-java \
  proto/demo.proto

Step 4: Compile the proto file (Python)

Compile with grpc_tools:
$ python -m grpc_tools.protoc \
  -I./proto \
  --python_out=./generated \
  --grpc_python_out=./generated \
  ./proto/demo.proto

Generated files:
– demo_pb2.py
– demo_pb2_grpc.py

Inspect the generated files:
$ ls -la generated/

Example output:
total 16
drwxr-xr-x 2 root root 4096 Apr 4 10:05 .
drwxr-xr-x 3 root root 4096 Apr 4 10:05 ..
-rw-r--r-- 1 root root 5678 Apr 4 10:05 demo_pb2.py
-rw-r--r-- 1 root root 3456 Apr 4 10:05 demo_pb2_grpc.py

Step 5: Compile the proto file (Go)

Generate Go code with protoc:
$ protoc --go_out=./generated \
  --go_opt=paths=source_relative \
  --go-grpc_out=./generated \
  --go-grpc_opt=paths=source_relative \
  proto/demo.proto

Generated files:
– demo.pb.go
– demo_grpc.pb.go

Inspect the generated files:
$ ls -la generated/

Example output:
total 24
drwxr-xr-x 2 root root 4096 Apr 4 10:05 .
drwxr-xr-x 3 root root 4096 Apr 4 10:05 ..
-rw-r--r-- 1 root root 12345 Apr 4 10:05 demo.pb.go
-rw-r--r-- 1 root root  8765 Apr 4 10:05 demo_grpc.pb.go

5. gRPC Production Configuration and Tuning

Step 1: Configure server parameters

Java server configuration:
import io.grpc.ServerBuilder;

Server server = ServerBuilder.forPort(9090)
    .addService(new DemoServiceImpl())
    .maxInboundMessageSize(16 * 1024 * 1024)
    .maxInboundMetadataSize(1 * 1024 * 1024)
    .keepAliveTime(30, TimeUnit.SECONDS)
    .keepAliveTimeout(10, TimeUnit.SECONDS)
    .permitKeepAliveTime(10, TimeUnit.SECONDS)
    .permitKeepAliveWithoutCalls(true)
    .build();

Parameter notes:
– maxInboundMessageSize: maximum inbound message size (default 4 MB)
– maxInboundMetadataSize: maximum inbound metadata size (default 8 KB)
– keepAliveTime: interval between keep-alive pings
– keepAliveTimeout: time to wait for a keep-alive ping acknowledgement
– permitKeepAliveTime: minimum keep-alive interval the server will accept from clients
– permitKeepAliveWithoutCalls: whether keep-alive pings are allowed when no calls are in flight

Step 2: Configure client parameters

Java client configuration:
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
    .usePlaintext()
    .maxInboundMessageSize(16 * 1024 * 1024)
    .keepAliveTime(30, TimeUnit.SECONDS)
    .keepAliveTimeout(10, TimeUnit.SECONDS)
    .keepAliveWithoutCalls(true)
    .idleTimeout(300, TimeUnit.SECONDS)
    .enableRetry()
    .maxRetryAttempts(3)
    .build();

Parameter notes:
– maxInboundMessageSize: maximum inbound message size
– keepAliveTime: interval between keep-alive pings
– keepAliveTimeout: time to wait for a keep-alive ping acknowledgement
– keepAliveWithoutCalls: whether to send keep-alive pings when no calls are in flight
– idleTimeout: how long the channel may stay idle before releasing its connection
– enableRetry: enable the retry mechanism
– maxRetryAttempts: maximum number of retry attempts
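What enableRetry() plus maxRetryAttempts(3) do inside the channel can be pictured as a retry loop with exponential backoff. A simplified Python sketch (the real channel also consults retryable status codes and the service config, which this illustration omits):

```python
import time

def call_with_retry(rpc, max_attempts: int = 3, base_delay: float = 0.01):
    """Invoke rpc(); on a transient failure, back off exponentially and retry."""
    for attempt in range(max_attempts):
        try:
            return rpc()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # attempts exhausted, surface the error
            time.sleep(base_delay * (2 ** attempt))

attempts = []
def flaky_rpc():
    """Fails twice, then succeeds, to exercise the retry path."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retry(flaky_rpc))  # succeeds on the third attempt
```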

Step 3: Configure thread pools

Java server thread pool:
import java.util.concurrent.Executors;
import io.grpc.ServerBuilder;

ExecutorService executor = Executors.newFixedThreadPool(
    Runtime.getRuntime().availableProcessors() * 2,
    new ThreadFactoryBuilder().setNameFormat("grpc-server-%d").build()
);

Server server = ServerBuilder.forPort(9090)
    .addService(new DemoServiceImpl())
    .executor(executor)
    .build();

Python server thread pool:
from concurrent import futures
import grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
server.add_insecure_port('[::]:9090')

Step 4: Configure interceptors

Java server interceptor:
import io.grpc.ServerInterceptor;
import io.grpc.Metadata;

public class LoggingServerInterceptor implements ServerInterceptor {
    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call,
            Metadata headers,
            ServerCallHandler<ReqT, RespT> next) {
        System.out.println("Received call: " + call.getMethodDescriptor().getFullMethodName());
        return next.startCall(call, headers);
    }
}

Register the interceptor:
Server server = ServerBuilder.forPort(9090)
    .addService(ServerInterceptors.intercept(
        new DemoServiceImpl(),
        new LoggingServerInterceptor()
    ))
    .build();

Java client interceptor:
import io.grpc.ClientInterceptor;

public class LoggingClientInterceptor implements ClientInterceptor {
    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method,
            CallOptions callOptions,
            Channel next) {
        System.out.println("Calling: " + method.getFullMethodName());
        return next.newCall(method, callOptions);
    }
}

Register the interceptor:
ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
    .intercept(new LoggingClientInterceptor())
    .build();

6. gRPC Security Configuration

Step 1: Configure TLS/SSL encryption

Generate an SSL certificate:
$ openssl genrsa -out server.key 2048
$ openssl req -new -key server.key -out server.csr \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=/CN=fgedu.net.cn"
$ openssl x509 -req -in server.csr -signkey server.key -out server.crt -days 365

Example output:
Signature ok
subject=C=CN, ST=Beijing, L=Beijing, O=, CN=fgedu.net.cn
Getting Private key

Java server TLS configuration:
import io.grpc.ServerBuilder;

Server server = ServerBuilder.forPort(9090)
    .useTransportSecurity(
        new File("/fgeudb/grpc/certs/server.crt"),
        new File("/fgeudb/grpc/certs/server.key")
    )
    .addService(new DemoServiceImpl())
    .build();

Java client TLS configuration:
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
    .useTransportSecurity()
    .build();

Note that useTransportSecurity() validates the server certificate against the default trust store, so a self-signed certificate like the one generated above must also be added to the client's trust configuration.

Step 2: Configure authentication

Token authentication interceptor:
import io.grpc.*;

public class AuthServerInterceptor implements ServerInterceptor {
    private static final Metadata.Key<String> AUTH_TOKEN_KEY =
        Metadata.Key.of("authorization", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
            ServerCall<ReqT, RespT> call,
            Metadata headers,
            ServerCallHandler<ReqT, RespT> next) {
        String token = headers.get(AUTH_TOKEN_KEY);
        if (token == null || !validateToken(token)) {
            call.close(Status.UNAUTHENTICATED.withDescription("Invalid token"), headers);
            return new ServerCall.Listener<ReqT>() {};
        }
        return next.startCall(call, headers);
    }

    private boolean validateToken(String token) {
        return "valid-token".equals(token);
    }
}
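The validateToken method above compares against a hard-coded string, which is only suitable for demos. A more realistic check verifies a signed token; here is a Python sketch using the stdlib hmac module (the secret and the `payload.signature` token format are illustrative assumptions, not part of gRPC):

```python
import base64
import hashlib
import hmac

SECRET = b"demo-secret"  # assumption: a shared secret, for illustration only

def sign(payload: str) -> str:
    """Produce a token of the form '<payload>.<base64 HMAC-SHA256 signature>'."""
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    return payload + "." + base64.urlsafe_b64encode(mac).decode()

def validate_token(token: str) -> bool:
    """Recompute the signature for the payload and compare in constant time."""
    payload, _, _ = token.rpartition(".")
    if not payload:
        return False  # no signature present
    return hmac.compare_digest(token, sign(payload))

token = sign("user=fengge")
print(validate_token(token))        # True: signature matches
print(validate_token(token + "x"))  # False: tampered signature
```

hmac.compare_digest avoids timing side channels that a plain string comparison would leak.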

Add the token on the client:
import io.grpc.*;

public class AuthClientInterceptor implements ClientInterceptor {
    private static final Metadata.Key<String> AUTH_TOKEN_KEY =
        Metadata.Key.of("authorization", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method,
            CallOptions callOptions,
            Channel next) {
        return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(
                next.newCall(method, callOptions)) {
            @Override
            public void start(Listener<RespT> responseListener, Metadata headers) {
                headers.put(AUTH_TOKEN_KEY, "valid-token");
                super.start(responseListener, headers);
            }
        };
    }
}

7. gRPC Load Balancing

Client-side load balancing

Java client load balancing:
import io.grpc.ManagedChannelBuilder;
import io.grpc.LoadBalancerRegistry;
import io.grpc.util.RoundRobinLoadBalancerProvider;

ManagedChannel channel = ManagedChannelBuilder.forTarget("dns:///demo-service")
    .defaultLoadBalancingPolicy("round_robin")
    .enableRetry()
    .maxRetryAttempts(3)
    .build();

Load balancing policies:
– round_robin: rotate requests across all ready servers
– pick_first: use the first reachable server (gRPC's default policy)
– weighted_round_robin: weighted round robin
– least_request: route to the server with the fewest outstanding requests
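The behavior of round_robin is easy to picture: ready endpoints are rotated across successive calls. A small Python sketch with a hypothetical backend list:

```python
import itertools

# Hypothetical ready-endpoint list, as a resolver might have produced it.
backends = ["10.0.0.1:9090", "10.0.0.2:9090", "10.0.0.3:9090"]
picker = itertools.cycle(backends)  # rotate endpoints, round_robin style

# Six consecutive calls hit each backend exactly twice, in order.
chosen = [next(picker) for _ in range(6)]
print(chosen)
```

The real policy additionally tracks connectivity state and skips endpoints that are not READY, which this sketch leaves out.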

Custom load balancing policy:
import io.grpc.LoadBalancer;
import io.grpc.LoadBalancerProvider;

public class CustomLoadBalancerProvider extends LoadBalancerProvider {
    @Override
    public LoadBalancer newLoadBalancer(LoadBalancer.Helper helper) {
        return new CustomLoadBalancer(helper);
    }

    @Override
    public boolean isAvailable() {
        return true;
    }

    @Override
    public int getPriority() {
        return 5;
    }

    @Override
    public String getPolicyName() {
        return "custom_policy";
    }
}

Service discovery integration

Integrating with Nacos:
import io.grpc.NameResolver;
import io.grpc.NameResolverProvider;

public class NacosNameResolverProvider extends NameResolverProvider {
    @Override
    public NameResolver newNameResolver(URI targetUri, NameResolver.Args args) {
        return new NacosNameResolver(targetUri, args);
    }

    @Override
    protected boolean isAvailable() {
        return true;
    }

    @Override
    protected int priority() {
        return 5;
    }

    @Override
    public String getDefaultScheme() {
        return "nacos";
    }
}

Using Nacos service discovery:
ManagedChannel channel = ManagedChannelBuilder.forTarget("nacos:///demo-service")
    .nameResolverRegistry(NameResolverRegistry.getDefaultRegistry())
    .defaultLoadBalancingPolicy("round_robin")
    .build();

8. Verifying and Testing the Installation

Start the gRPC server

Java server:
$ java -jar grpc-server.jar

Example output:
2026-04-04 10:10:00.000 INFO [main] gRPC Server started on port 9090
2026-04-04 10:10:00.000 INFO [main] Server is ready to accept requests

Python server:
$ python server.py

Example output:
Server started on port 9090
Waiting for requests...

Go server:
$ go run server.go

Example output:
2026/04/04 10:10:00 Server listening on :9090

Testing with grpcurl

Install grpcurl:
$ go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest

List services (these commands require server reflection to be enabled on the server):
$ grpcurl -plaintext localhost:9090 list

Example output:
demo.DemoService

List the service's methods:
$ grpcurl -plaintext localhost:9090 list demo.DemoService

Example output:
SayHello
SayHelloStream
SayHelloClientStream
SayHelloBidiStream

Describe a method:
$ grpcurl -plaintext localhost:9090 describe demo.DemoService.SayHello

Example output:
demo.DemoService.SayHello is a method:
rpc SayHello ( .demo.HelloRequest ) returns ( .demo.HelloReply );

Invoke a method:
$ grpcurl -plaintext -d '{"name": "fengge", "age": 30}' localhost:9090 demo.DemoService/SayHello

Example output:
{
  "message": "Hello fengge, you are 30 years old!",
  "timestamp": "1712217600000"
}

Health checks with grpc_health_probe

Install grpc_health_probe:
$ go install github.com/grpc-ecosystem/grpc-health-probe@latest

Run a health check (the server must register the standard gRPC health checking service):
$ grpc_health_probe -addr=localhost:9090

Example output:
status: SERVING

Kubernetes health-check configuration:
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:9090"]
  initialDelaySeconds: 5
readinessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:9090"]
  initialDelaySeconds: 5

9. Client Examples in Multiple Languages

Java client example

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import com.fgedu.grpc.demo.DemoServiceGrpc;
import com.fgedu.grpc.demo.DemoProto.HelloRequest;
import com.fgedu.grpc.demo.DemoProto.HelloReply;

public class DemoClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
            .usePlaintext()
            .build();

        DemoServiceGrpc.DemoServiceBlockingStub stub = DemoServiceGrpc.newBlockingStub(channel);

        HelloRequest request = HelloRequest.newBuilder()
            .setName("fengge")
            .setAge(30)
            .build();

        HelloReply reply = stub.sayHello(request);
        System.out.println("Reply: " + reply.getMessage());

        channel.shutdown();
    }
}

Asynchronous call example:
DemoServiceGrpc.DemoServiceFutureStub futureStub = DemoServiceGrpc.newFutureStub(channel);
ListenableFuture<HelloReply> future = futureStub.sayHello(request);
Futures.addCallback(future, new FutureCallback<HelloReply>() {
    @Override
    public void onSuccess(HelloReply result) {
        System.out.println("Reply: " + result.getMessage());
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, MoreExecutors.directExecutor());

Python client example

import grpc
import demo_pb2
import demo_pb2_grpc

def run():
    channel = grpc.insecure_channel('192.168.1.51:9090')
    stub = demo_pb2_grpc.DemoServiceStub(channel)

    request = demo_pb2.HelloRequest(name='fengge', age=30)
    response = stub.SayHello(request)

    print(f"Reply: {response.message}")
    channel.close()

if __name__ == '__main__':
    run()

Streaming call example:
def run_stream():
    channel = grpc.insecure_channel('192.168.1.51:9090')
    stub = demo_pb2_grpc.DemoServiceStub(channel)

    request = demo_pb2.HelloRequest(name='fengge', age=30)
    responses = stub.SayHelloStream(request)

    for response in responses:
        print(f"Stream Reply: {response.message}")

    channel.close()

Go client example

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    pb "fgedu.net.cn/grpc/demo"
)

func main() {
    conn, err := grpc.Dial("192.168.1.51:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("Failed to connect: %v", err)
    }
    defer conn.Close()

    client := pb.NewDemoServiceClient(conn)

    ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
    defer cancel()

    response, err := client.SayHello(ctx, &pb.HelloRequest{
        Name: "fengge",
        Age:  30,
    })
    if err != nil {
        log.Fatalf("Failed to call: %v", err)
    }

    log.Printf("Reply: %s", response.Message)
}

Streaming call example (also add "fmt" and "io" to the import list):
func runStream(client pb.DemoServiceClient) {
    stream, err := client.SayHelloBidiStream(context.Background())
    if err != nil {
        log.Fatalf("Failed to create stream: %v", err)
    }

    waitc := make(chan struct{})

    go func() {
        for {
            response, err := stream.Recv()
            if err == io.EOF {
                close(waitc)
                return
            }
            if err != nil {
                log.Fatalf("Failed to receive: %v", err)
            }
            log.Printf("Stream Reply: %s", response.Message)
        }
    }()

    for i := 0; i < 5; i++ {
        err := stream.Send(&pb.HelloRequest{
            Name: fmt.Sprintf("fengge-%d", i),
            Age:  int32(30 + i),
        })
        if err != nil {
            log.Fatalf("Failed to send: %v", err)
        }
    }
    stream.CloseSend()
    <-waitc
}

10. Common Problems and Solutions

Problem 1: Connection timeouts

Symptom: the client times out when connecting to the server

Solutions:
1. Check that the server is running:
$ netstat -tlnp | grep 9090

2. Open the firewall port:
# firewall-cmd --add-port=9090/tcp --permanent
# firewall-cmd --reload

3. Tune keep-alive settings on the channel:
ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
    .usePlaintext()
    .keepAliveTime(30, TimeUnit.SECONDS)
    .keepAliveTimeout(10, TimeUnit.SECONDS)
    .build();

4. Check network connectivity:
$ telnet 192.168.1.51 9090
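The telnet test above can also be scripted. A small stdlib-only Python helper that performs the same TCP-level reachability check (host and port below are placeholders):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener opened locally, so the check is self-contained:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))  # the port is listening, so True
listener.close()
```

Note this only proves TCP reachability; a successful connection does not guarantee the gRPC service itself is healthy.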

Problem 2: Message too large

Symptom: error "Received message larger than max"

Solutions:
1. Raise the server-side message size limit:
Server server = ServerBuilder.forPort(9090)
    .maxInboundMessageSize(16 * 1024 * 1024) // 16 MB
    .build();

2. Raise the client-side message size limit:
ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
    .maxInboundMessageSize(16 * 1024 * 1024) // 16 MB
    .build();

3. Consider streaming for very large payloads

Problem 3: Serialization errors

Symptom: error "Protocol message contained an invalid tag"

Solutions:
1. Make sure the client and server use the same version of the proto file
2. Check that the proto field numbers are correct
3. Make sure the generated code is up to date

Recompile the proto file:
$ protoc --java_out=src/main/java \
  --grpc-java_out=src/main/java \
  --plugin=protoc-gen-grpc-java=/usr/local/bin/protoc-gen-grpc-java \
  proto/demo.proto
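The "invalid tag" in this error refers to the wire-format key that prefixes every encoded field: key = (field_number << 3) | wire_type. When the two sides' protos disagree, or bytes are corrupted, the decoder hits a key it cannot interpret. A short Python illustration using the fields from demo.proto:

```python
def field_key(field_number: int, wire_type: int) -> int:
    """Protobuf wire-format key: field number in the high bits, wire type in the low 3."""
    return (field_number << 3) | wire_type

# Wire types: 0 = varint, 2 = length-delimited (strings, bytes, sub-messages).
# HelloRequest from demo.proto: string name = 1; int32 age = 2;
print(hex(field_key(1, 2)))  # 0xa:  first byte of an encoded 'name' field
print(hex(field_key(2, 0)))  # 0x10: first byte of an encoded 'age' field
```

Because the key encodes both pieces, renumbering a field or changing its type changes the bytes on the wire, which is why mismatched proto versions break decoding.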

Problem 4: TLS certificate errors

Symptom: error "TLS handshake failed"

Solutions:
1. Check that the certificate file paths are correct
2. Check whether the certificate has expired:
$ openssl x509 -in server.crt -noout -dates

3. Regenerate the certificate:
$ openssl req -new -key server.key -out server.csr \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=/CN=fgedu.net.cn"
$ openssl x509 -req -in server.csr -signkey server.key -out server.crt -days 365

4. Use the correct TLS configuration:
ManagedChannel channel = ManagedChannelBuilder.forAddress("192.168.1.51", 9090)
    .useTransportSecurity()
    .build();

gRPC Service Management Commands

Start the service:
$ java -jar grpc-server.jar

Start in the background:
$ nohup java -jar grpc-server.jar > /fgeudb/logs/grpc-server.log 2>&1 &

Stop the service:
$ kill -15 $(cat /fgeudb/grpc/server.pid)

View service logs:
$ tail -f /fgeudb/logs/grpc-server.log

Health check:
$ grpc_health_probe -addr=localhost:9090

List services:
$ grpcurl -plaintext localhost:9090 list

Test call:
$ grpcurl -plaintext -d '{"name": "test"}' localhost:9090 demo.DemoService/SayHello

Production recommendations:
1. Use a stable gRPC release such as 1.78.x or 1.80.x.
2. Always enable TLS encryption in production.
3. Configure sensible message size limits.
4. Use keep-alive to keep connections active.
5. Configure client-side load balancing for high availability.
6. Use interceptors for authentication and logging.
7. Size thread pools appropriately.
8. Use streaming for large data transfers.
9. Configure health checks, monitoring, and alerting.
10. Keep certificates and dependency versions up to date.

Compiled and published by Fengge Tutorials for learning and testing purposes only. When reposting, please credit the source: http://www.fgedu.net.cn/10327.html
