# 019. QPS Benchmarking the Project's Master-Slave Redis Architecture, and Scaling Horizontally to Support Higher QPS

Suppose you want to run a baseline load test against the redis instance you have just set up, to measure its performance and QPS (queries per second).

The redis-benchmark tool that ships with redis is the quickest and most convenient way to do that. It stress-tests the server with a set of simple operations and scenarios (the key word being simple).

Tool path: /usr/local/redis-3.2.8/src/redis-benchmark

The syntax is shown below. Besides the three options that control concurrency, request count, and payload size, the usual connection options are also needed; for example, if you have configured a password you must add `-a redis-pass`.

```
./redis-benchmark
-c <clients>       Number of parallel connections (default 50)
-n <requests>      Total number of requests (default 100000)
-d <size>          Data size of SET/GET value in bytes (default 2)
```

For example: suppose at peak the instantaneous maximum user count reaches 100,000+, and you want to issue 10 million requests in total with 50 bytes of data each: `-c 100000 -n 10000000 -d 50`.
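Assembled into a full command, that scenario would look something like the sketch below (the host, port, and password are assumed values; substitute your own). Note that 100,000 parallel connections will usually exceed a Linux box's default open-file limit, so you may need to raise `ulimit -n` first or run with a smaller `-c`.

```bash
# Peak-traffic benchmark sketch; -h/-p/-a are assumed values for this setup.
./redis-benchmark -h 127.0.0.1 -p 6379 -a redis-pass \
    -c 100000 \
    -n 10000000 \
    -d 50
```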

Below is the benchmark data for the redis instance I just deployed; the full run takes a few minutes.

# eshop-cache01 benchmark data

1 core, 1 GB RAM, virtual machine

(The output below is pasted from the instructor's run — there is too much of it, and copying it from my own split-screen terminal would take many passes.)

```
/usr/local/redis-3.2.8/src
[root@eshop-cache01 src]# ./redis-benchmark -a redis-pass
====== PING_INLINE ======
  100000 requests completed in 1.28 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.78% <= 1 milliseconds
99.93% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
78308.54 requests per second

====== PING_BULK ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.87% <= 1 milliseconds
100.00% <= 1 milliseconds
76804.91 requests per second

====== SET ======
  100000 requests completed in 2.50 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

5.95% <= 1 milliseconds
99.63% <= 2 milliseconds
99.93% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40032.03 requests per second    // e.g. this SET test comes out to roughly 40K QPS

====== GET ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.73% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
76628.35 requests per second  // e.g. this GET test comes out to 70K+ QPS

====== INCR ======
  100000 requests completed in 1.90 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

80.92% <= 1 milliseconds
99.81% <= 2 milliseconds
99.95% <= 3 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
100.00% <= 6 milliseconds
52548.61 requests per second

====== LPUSH ======
  100000 requests completed in 2.58 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

3.76% <= 1 milliseconds
99.61% <= 2 milliseconds
99.93% <= 3 milliseconds
100.00% <= 3 milliseconds
38684.72 requests per second

====== RPUSH ======
  100000 requests completed in 2.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

6.87% <= 1 milliseconds
99.69% <= 2 milliseconds
99.87% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40469.45 requests per second

====== LPOP ======
  100000 requests completed in 2.26 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

28.39% <= 1 milliseconds
99.83% <= 2 milliseconds
100.00% <= 2 milliseconds
44306.60 requests per second

====== RPOP ======
  100000 requests completed in 2.18 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

36.08% <= 1 milliseconds
99.75% <= 2 milliseconds
100.00% <= 2 milliseconds
45871.56 requests per second

====== SADD ======
  100000 requests completed in 1.23 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.94% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
81168.83 requests per second

====== SPOP ======
  100000 requests completed in 1.28 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.80% <= 1 milliseconds
99.96% <= 2 milliseconds
99.96% <= 3 milliseconds
99.97% <= 5 milliseconds
100.00% <= 5 milliseconds
78369.91 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 2.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

15.29% <= 1 milliseconds
99.64% <= 2 milliseconds
99.94% <= 3 milliseconds
100.00% <= 3 milliseconds
40420.37 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 3.69 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

30.86% <= 1 milliseconds
96.99% <= 2 milliseconds
99.94% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
27085.59 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 10.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.03% <= 1 milliseconds
5.90% <= 2 milliseconds
90.68% <= 3 milliseconds
95.46% <= 4 milliseconds
97.67% <= 5 milliseconds
99.12% <= 6 milliseconds
99.98% <= 7 milliseconds
100.00% <= 7 milliseconds
9784.74 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 14.71 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 1 milliseconds
0.07% <= 2 milliseconds
1.59% <= 3 milliseconds
89.26% <= 4 milliseconds
97.90% <= 5 milliseconds
99.24% <= 6 milliseconds
99.73% <= 7 milliseconds
99.89% <= 8 milliseconds
99.96% <= 9 milliseconds
99.99% <= 10 milliseconds
100.00% <= 10 milliseconds
6799.48 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 18.56 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 2 milliseconds
0.23% <= 3 milliseconds
1.75% <= 4 milliseconds
91.17% <= 5 milliseconds
98.16% <= 6 milliseconds
99.04% <= 7 milliseconds
99.83% <= 8 milliseconds
99.95% <= 9 milliseconds
99.98% <= 10 milliseconds
100.00% <= 10 milliseconds
5387.35 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 4.02 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 1 milliseconds
53.22% <= 2 milliseconds
99.12% <= 3 milliseconds
99.55% <= 4 milliseconds
99.70% <= 5 milliseconds
99.90% <= 6 milliseconds
99.95% <= 7 milliseconds
100.00% <= 8 milliseconds
24869.44 requests per second
```

# eshop-cache02 benchmark data

The output is too long, so it is not pasted here.

# Notes on the QPS redis can support

It is hard to give a single number. As you just saw, it depends on the machine configuration and on the scenario (complex operations? simple operations? how large is the data?).

For example, on a dedicated cluster built specifically for one project, with 4-core 4 GB machines, fairly complex operations, and fairly large values, a few tens of thousands of QPS is about what a single node can do.

Generally speaking, the high concurrency redis provides is at least 10,000+ QPS without any problem.

So what does the achievable QPS depend on?

  • Machine configuration
  • Operation complexity
  • Data size
  • Network bandwidth / network overhead

So the concrete QPS figure is something you have to measure yourself, and even then it may not match production, because production involves a large volume of network calls, network overhead, and so on.
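One way to see how much of the measured latency is network round trips rather than redis itself is redis-benchmark's `-P` option, which pipelines several requests per round trip. A sketch, using the same assumed password as above:

```bash
# Baseline: one GET request per network round trip.
./redis-benchmark -a redis-pass -t get -n 100000 -c 50

# Pipelined: 16 requests per round trip; QPS typically rises several-fold,
# showing that network round trips, not redis, were the bottleneck.
./redis-benchmark -a redis-pass -t get -n 100000 -c 50 -P 16
```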

The product detail page cache covered later may need to concatenate a large amount of data into one JSON string, possibly several KB in size, so the QPS will differ depending on the environment.
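To approximate that kind of workload, you can rerun just SET and GET with a payload closer to a few KB; a sketch under the same password assumption (2 KB values here as an arbitrary stand-in for the detail-page JSON):

```bash
# Rerun only SET/GET with 2 KB values to mimic larger detail-page payloads.
./redis-benchmark -a redis-pass -t set,get -d 2048 -n 100000 -c 50
```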

# Horizontally scaling redis read nodes to increase throughput

Following what the previous lesson covered, set up additional redis slave nodes on other servers. Assuming a single slave can serve a read QPS of around 50,000, then with two redis slaves and all read requests spread across the two machines, the cluster as a whole can sustain a read QPS of 100,000+.
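A quick way to sanity-check that capacity estimate is to benchmark reads against each slave separately and sum the results, assuming your clients spread reads evenly. The IPs below are placeholders for the two read slaves:

```bash
# Hypothetical slave addresses; substitute the IPs of your own read nodes.
for slave in 192.168.1.102 192.168.1.103; do
    echo "=== GET benchmark against $slave ==="
    ./redis-benchmark -h "$slave" -p 6379 -a redis-pass -t get -n 100000 -c 50
done
```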