
Fast-Paced Multiplayer, Part III: Entity Interpolation

Posted on 07-18-2016 | In GS

Original article:

Fast-Paced Multiplayer (Part III): Entity Interpolation


Introduction

In the first article of the series, we introduced the concept of an authoritative server and its usefulness to prevent client cheats. However, using this technique naively can lead to potentially showstopper issues regarding playability and responsiveness. In the second article, we proposed client-side prediction as a way to overcome these limitations.


The net result of these two articles is a set of concepts and techniques that allow a player to control an in-game character in a way that feels exactly like a single-player game, even when connected to an authoritative server through an internet connection with transmission delays.


In this article, we’ll explore the consequences of having other player-controlled characters connected to the same server.


Server time step


In the previous article, the behavior of the server we described was pretty simple – it read client inputs, updated the game state, and sent it back to the client. When more than one client is connected, though, the main server loop is somewhat different.


In this scenario, several clients may be sending inputs simultaneously, and at a fast pace (as fast as the player can issue commands, be it pressing arrow keys, moving the mouse or clicking the screen). Updating the game world every time inputs are received from each client and then broadcasting the game state would consume too much CPU and bandwidth.


A better approach is to queue the client inputs as they are received, without any processing. Instead, the game world is updated periodically at low frequency, for example 10 times per second. The delay between every update, 100 ms in this case, is called the time step. In every update loop iteration, all the unprocessed client input is applied (possibly in smaller time increments than the time step, to make physics more predictable), and the new game state is broadcast to the clients.
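To make the idea concrete, here is a minimal sketch of such a server loop. The `apply_input` and `broadcast` helpers, the queue layout and the 100 ms step are illustrative assumptions, not code from the original article:

```python
import time
from collections import deque

TIME_STEP = 0.1  # 100 ms between world updates (10 updates per second)

input_queue = deque()   # (client_id, input) pairs, appended as packets arrive
game_state = {}         # client_id -> player state, filled in elsewhere

def apply_input(state, client_id, player_input):
    # Hypothetical helper: advance one player's state by one input.
    pass

def broadcast(state):
    # Hypothetical helper: send the authoritative state to every client.
    pass

def server_loop():
    while True:
        start = time.monotonic()
        # Drain everything that arrived since the last tick, in arrival order.
        while input_queue:
            client_id, player_input = input_queue.popleft()
            apply_input(game_state, client_id, player_input)
        broadcast(game_state)
        # Sleep for whatever is left of the fixed time step.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, TIME_STEP - elapsed))
```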


In summary, the game world updates independently of the presence and amount of client input, at a predictable rate.


Dealing with low-frequency updates


From the point of view of a client, this approach works as smoothly as before – client-side prediction works independently of the update delay, so it clearly also works under predictable, if relatively infrequent, state updates. However, since the game state is broadcast at a low frequency (continuing with the example, every 100ms), the client has very sparse information about the other entities that may be moving throughout the world.


A first implementation would update the position of other characters when it receives a state update; this immediately leads to very choppy movement, that is, discrete jumps every 100ms instead of smooth movement.



Client 1 as seen by Client 2.


Depending on the type of game you’re developing there are many ways to deal with this; in general, the more predictable your game entities are, the easier it is to get it right.


Dead reckoning


Suppose you’re making a car racing game. A car that goes really fast is pretty predictable – for example, if it’s running at 100 meters per second, a second later it will be roughly 100 meters ahead of where it started.


Why “roughly”? During that second the car could have accelerated or decelerated a bit, or turned to the right or to the left a bit – the key word here is “a bit”. The maneuverability of a car is such that at high speeds its position at any point in time is highly dependent on its previous position, speed and direction, regardless of what the player actually does. In other words, a racing car can’t do a 180º turn instantly.


How does this work with a server that sends updates every 100 ms? The client receives authoritative speed and heading for every competing car; for the next 100 ms it won’t receive any new information, but it still needs to show them running. The simplest thing to do is to assume the car’s heading and acceleration will remain constant during those 100 ms, and run the car physics locally with those parameters. Then, 100 ms later, when the server update arrives, the car’s position is corrected.
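A minimal dead-reckoning sketch of that idea, assuming the server sends position, speed and heading every 100 ms; the class, the purely linear extrapolation and the names are illustrative, not the article's code:

```python
import math

class RemoteCar:
    def __init__(self):
        self.x = self.y = 0.0      # last authoritative position
        self.speed = 0.0           # meters per second
        self.heading = 0.0         # radians
        self.last_update = 0.0     # time the last server packet refers to

    def on_server_update(self, t, x, y, speed, heading):
        # Snap to the authoritative state (a real client would smooth the correction).
        self.x, self.y = x, y
        self.speed, self.heading = speed, heading
        self.last_update = t

    def predicted_position(self, t):
        # Extrapolate: assume speed and heading stay constant since the last
        # update, which is a reasonable bet for a fast car over 100 ms.
        dt = t - self.last_update
        return (self.x + self.speed * math.cos(self.heading) * dt,
                self.y + self.speed * math.sin(self.heading) * dt)
```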


The correction can be big or relatively small depending on a lot of factors. If the player keeps the car in a straight line and doesn’t change the car’s speed, the predicted position will be exactly like the corrected position. On the other hand, if the player crashes against something, the predicted position will be extremely wrong.


Note that dead reckoning can be applied to low-speed situations – battleships, for example. In fact, the term “dead reckoning” has its origins in marine navigation.


Entity interpolation


There are some situations where dead reckoning can’t be applied at all – in particular, all scenarios where the player’s direction and speed can change instantly. For example, in a 3D shooter, players usually run, stop, and turn corners at very high speeds, making dead reckoning essentially useless, as positions and speeds can no longer be predicted from previous data.


You can’t just update player positions when the server sends authoritative data; you’d get players who teleport short distances every 100 ms, making the game unplayable.


What you do have is authoritative position data every 100 ms; the trick is how to show the player what happens in between. The key to the solution is to show the other players in the past relative to the user’s player.


Say you receive position data at t = 1000. You already had received data at t = 900, so you know where the player was at t = 900 and t = 1000. So, from t = 1000 to t = 1100, you show what the other player did from t = 900 to t = 1000. This way you’re always showing the user actual movement data, except you’re showing it 100 ms “late”.
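A minimal sketch of this buffering-and-interpolation idea, rendering other entities 100 ms in the past; the one-dimensional position, the buffer layout and the names are assumptions for illustration:

```python
INTERP_DELAY = 0.1  # render other players 100 ms in the past

class InterpolatedEntity:
    def __init__(self):
        self.buffer = []  # list of (server_time, x) snapshots, oldest first

    def on_server_update(self, server_time, x):
        self.buffer.append((server_time, x))

    def render_position(self, now):
        target = now - INTERP_DELAY  # the moment we actually want to show
        # Drop snapshots that are no longer needed, keeping one before the target.
        while len(self.buffer) >= 2 and self.buffer[1][0] <= target:
            self.buffer.pop(0)
        if len(self.buffer) >= 2 and self.buffer[0][0] <= target <= self.buffer[1][0]:
            (t0, x0), (t1, x1) = self.buffer[0], self.buffer[1]
            alpha = (target - t0) / (t1 - t0)   # linear interpolation factor
            return x0 + (x1 - x0) * alpha
        # Not enough data yet: show the latest known position.
        return self.buffer[-1][1] if self.buffer else 0.0
```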



Client 2 renders Client 1 “in the past”, interpolating last known positions.


The position data you use to interpolate from t = 900 to t = 1000 depends on the game. Interpolation usually works well enough. If it doesn’t, you can have the server send more detailed movement data with each update – for example, a sequence of straight segments followed by the player, or positions sampled every 10 ms which look better when interpolated (you don’t need to send 10 times more data – since you’re sending deltas for small movements, the format on the wire can be heavily optimized for this particular case).


Note that using this technique, every player sees a slightly different rendering of the game world, because each player sees itself in the present but sees the other entities in the past. Even for a fast-paced game, however, seeing other entities with a 100 ms delay isn’t generally noticeable.


There are exceptions – when you need a lot of spatial and temporal accuracy, such as when the player shoots at another player. Since the other players are seen in the past, you’re aiming with a 100 ms delay – that is, you’re shooting where your target was 100 ms ago! We’ll deal with this in the next article.


Summary


In a client-server environment with an authoritative server, infrequent updates and network delay, you must still give players the illusion of continuity and smooth movement. In part 2 of the series we explored a way to show the user controlled player’s movement in real time using client-side prediction and server reconciliation; this ensures user input has an immediate effect on the local player, removing a delay that would render the game unplayable.


Other entities are still a problem, however. In this article we explored two ways of dealing with them.


The first one, dead reckoning, applies to certain kinds of simulations where entity position can be acceptably estimated from previous entity data such as position, speed and acceleration. This approach fails when these conditions aren’t met.


The second one, entity interpolation, doesn’t predict future positions at all – it uses only real entity data provided by the server, thus showing the other entities slightly delayed in time.


The net effect is that the user’s player is seen in the present and the other entities are seen in the past. This usually creates an incredibly seamless experience.


However, if nothing else is done, the illusion breaks down when an event needs high spatial and temporal accuracy, such as shooting at a moving target: the position where Client 2 renders Client 1 doesn’t match the server’s nor Client 1’s position, so headshots become impossible! Since no game is complete without headshots, we’ll deal with this issue in the next article.

Fast-Paced Multiplayer, Part II: Client-Side Prediction and Server Reconciliation

Posted on 07-17-2016 | In GS

Original article:

Fast-Paced Multiplayer (Part II): Client-Side Prediction and Server Reconciliation


Introduction

In the first article of this series, we explored a client-server model with an authoritative server and dumb clients that just send inputs to the server and then render the updated game state when the server sends it.


A naive implementation of this scheme leads to a delay between user commands and changes on the screen; for example, the player presses the right arrow key, and the character takes half a second before it starts moving. This is because the client input must first travel to the server, the server must process the input and calculate a new game state, and the updated game state must reach the client again.



Effect of network delays.


In a networked environment such as the internet, where delays can be on the order of tenths of a second, a game may feel unresponsive at best, or in the worst case, be rendered unplayable. In this article, we’ll find ways to minimize or even eliminate that problem.


Client-side prediction


Even though there are some cheating players, most of the time the game server is processing valid requests (from non-cheating clients and from cheating clients who aren’t cheating at that particular time). This means most of the input received will be valid and will update the game state as expected; that is, if your character is at (10, 10) and the right arrow key is pressed, it will end up at (11, 10).


We can use this to our advantage. If the game world is deterministic enough (that is, given a game state and a set of inputs, the result is completely predictable), the client can apply its own inputs locally and predict the outcome instead of waiting for the server.


Let’s suppose we have a 100 ms lag, and the animation of the character moving from one square to the next takes 100 ms. Using the naive implementation, the whole action would take 200 ms:



Network delay + animation.


Since the world is deterministic, we can assume the inputs we send to the server will be executed successfully. Under this assumption, the client can predict the state of the game world after the inputs are processed, and most of the time this will be correct.


Instead of sending the inputs and waiting for the new game state to start rendering it, we can send the input and start rendering the outcome of those inputs as if they had succeeded, while we wait for the server to send the “true” game state – which more often than not will match the state calculated locally:



Animation plays while the server confirms the action.


Now there’s absolutely no delay between the player’s actions and the results on the screen, while the server is still authoritative (if a hacked client sends invalid inputs, it can render whatever it wants on its own screen, but it won’t affect the state of the server, which is what the other players see).
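A minimal sketch of this predict-first approach under the article's assumptions (deterministic one-square moves; the `send_to_server` and `render` helpers are hypothetical). Note that it still snaps to the server state whenever one arrives, which is exactly the behavior the next section refines:

```python
class PredictingClient:
    def __init__(self):
        self.x = 10  # local, predicted position

    def send_to_server(self, player_input):
        # Hypothetical network call; a real client would serialize and send it.
        pass

    def on_key_right(self):
        self.send_to_server("right")
        self.x += 1            # apply the input locally right away...
        self.render()          # ...so the player sees the result immediately

    def on_server_state(self, server_x):
        self.x = server_x      # the server remains authoritative
        self.render()

    def render(self):
        print("character at x =", self.x)
```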


Synchronization issues


In the example above, I chose the numbers carefully to make everything work fine. However, consider a slightly modified scenario: let’s say we have a 250 ms lag to the server, and moving from a square to the next takes 100 ms. Let’s also say the player presses the right key 2 times in a row, trying to move 2 squares to the right.


Using the techniques so far, this is what would happen:



Predicted state and authoritative state mismatch.


We run into an interesting problem at t = 250 ms, when the new game state arrives. The predicted state at the client is x = 12, but the server says the new game state is x = 11. Because the server is authoritative, the client must move the character back to x = 11. But then, a new server state arrives at t = 350, which says x = 12, so the character jumps again, forward this time.


From the point of view of the player, he pressed the right arrow key twice; the character moved two squares to the right, stood there for 50 ms, jumped one square to the left, stood there for 100 ms, and jumped one square to the right. This, of course, is unacceptable.


Server reconciliation


The key to fix this problem is to realize that the client sees the game world in present time, but because of lag, the updates it gets from the server are actually the state of the game in the past. By the time the server sent the updated game state, it hadn’t processed all the commands sent by the client.


This isn’t terribly difficult to work around, though. First, the client adds a sequence number to each request; in our example, the first key press is request #1, and the second key press is request #2. Then, when the server replies, it includes the sequence number of the last input it processed:



Client-side prediction + server reconciliation.



Perhaps this diagram explains it more clearly.


Now, at t = 250, the server says “based on what I’ve seen up to your request #1, your position is x = 11”. Because the server is authoritative, it sets the character position at x = 11. Now let’s assume the client keeps a copy of the requests it sends to the server. Based on the new game state, it knows the server has already processed request #1, so it can discard that copy. But it also knows the server still has to send back the result of processing request #2. So applying client-side prediction again, the client can calculate the “present” state of the game based on the last authoritative state sent by the server, plus the inputs the server hasn’t processed yet.


So, at t = 250, the client gets “x = 11, last processed request = #1”. It discards its copies of sent input up to #1 – but it retains a copy of #2, which hasn’t been acknowledged by the server. It updates its internal game state with what the server sent, x = 11, and then applies all the input still not seen by the server – in this case, input #2, “move to the right”. The end result is x = 12, which is correct.


Continuing with our example, at t = 350 a new game state arrives from the server; this time it says “x = 12, last processed request = #2”. At this point, the client discards all input up to #2, and updates the state with x = 12. There’s no unprocessed input to replay, so processing ends there, with the correct result.
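A sketch of the full prediction-plus-reconciliation loop described above; the sequence numbers, the pending-input list and the helper names are illustrative assumptions, not the article's code:

```python
class ReconcilingClient:
    def __init__(self):
        self.x = 10
        self.next_seq = 1
        self.pending = []  # inputs sent but not yet acknowledged: (seq, input)

    def send_to_server(self, seq, player_input):
        # Hypothetical network call.
        pass

    def apply(self, player_input):
        if player_input == "right":
            self.x += 1

    def on_key_right(self):
        seq = self.next_seq
        self.next_seq += 1
        self.send_to_server(seq, "right")
        self.pending.append((seq, "right"))
        self.apply("right")              # predict immediately

    def on_server_state(self, server_x, last_processed_seq):
        # Start from the authoritative state...
        self.x = server_x
        # ...drop inputs the server has already processed...
        self.pending = [(s, i) for (s, i) in self.pending if s > last_processed_seq]
        # ...and re-apply the ones it hasn't, to get back to "the present".
        for _, player_input in self.pending:
            self.apply(player_input)
```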


Odds and ends


The example discussed above implies movement, but the same principle can be applied to almost anything else. For example, in a turn-based combat game, when the player attacks another character, you can show blood and a number representing the damage done, but you shouldn’t actually update the health of the character until the server says so.


Because of the complexities of game state, which isn’t always easily reversible, you may want to avoid killing a character until the server says so, even if its health dropped below zero in the client’s game state (what if the other character used a first-aid kit just before receiving your deadly attack, but the server hasn’t told you yet?)


This brings us to an interesting point – even if the world is completely deterministic and no clients cheat at all, it’s still possible that the state predicted by the client and the state sent by the server don’t match after a reconciliation. The scenario is impossible as described above with a single player, but it’s easy to run into when several players are connected to the server at once. This will be the topic of the next article.


Summary


When using an authoritative server, you need to give the player the illusion of responsiveness, while you wait for the server to actually process your inputs. To do this, the client simulates the results of the inputs. When the updated server state arrives, the predicted client state is recomputed from the updated state and the inputs the client sent but the server hasn’t acknowledged yet.

Fast-Paced Multiplayer, Part I: Client-Server Game Architecture

Posted on 07-16-2016 | In GS

Original article:

Fast-Paced Multiplayer (Part I): Client-Server Game Architecture


Introduction

This is the first in a series of articles exploring the techniques and algorithms that make fast-paced multiplayer games possible. If you’re familiar with the concepts behind multiplayer games, you can safely skip to the next article – what follows is an introductory discussion.


Developing any kind of game is itself challenging; multiplayer games, however, add a completely new set of problems to be dealt with. Interestingly enough, the core problems are human nature and physics!


The problem of cheating


It all starts with cheating.


As a game developer, you usually don’t care whether a player cheats in your single-player game – their actions don’t affect anyone but them. A cheating player may not experience the game exactly as you planned, but since it’s their game, they have the right to play it in any way they please.


Multiplayer games are different, though. In any competitive game, a cheating player isn’t just making the experience better for himself, he’s also making the experience worse for the other players. As the developer, you probably want to avoid that, since it tends to drive players away from your game.


There are many things that can be done to prevent cheating, but the most important one (and probably the only really meaningful one) is simple: don’t trust the player. Always assume the worst – that players will try to cheat.


Authoritative servers and dumb clients


This leads to a seemingly simple solution – you make everything in your game happen in a central server under your control, and make the clients just privileged spectators of the game. In other words, your game client sends inputs (key presses, commands) to the server, the server runs the game, and you send the results back to the clients. This is usually called using an authoritative server, because the one and only authority regarding everything that happens in the world is the server.


Of course, your server can be exploited for vulnerabilities, but that’s out of the scope of this series of articles. Using an authoritative server does prevent a wide range of hacks, though. For example, you don’t trust the client with the health of the player; a hacked client can modify its local copy of that value and tell the player it has 10000% health, but the server knows it only has 10% – when the player is attacked it will die, regardless of what a hacked client may think.


You also don’t trust the player with its position in the world. If you did, a hacked client would tell the server “I’m at (10,10)” and a second later “I’m at (20,10)”, possibly going through a wall or moving faster than the other players. Instead, the server knows the player is at (10,10), the client tells the server “I want to move one square to the right”, the server updates its internal state with the new player position at (11,10), and then replies to the player “You’re at (11, 10)”:
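A minimal sketch of how an authoritative server might validate and apply a movement command; the one-square-per-command rule and the dict-based state are assumptions for illustration:

```python
# Server side: the only trusted copy of every player's position.
positions = {"player_1": (10, 10)}

MOVES = {"right": (1, 0), "left": (-1, 0), "up": (0, -1), "down": (0, 1)}

def handle_command(player_id, command):
    """Apply one movement command and return the authoritative position."""
    x, y = positions[player_id]
    dx, dy = MOVES.get(command, (0, 0))   # unknown commands are ignored
    # The server decides the new position; the client never states "I'm at (20, 10)".
    new_pos = (x + dx, y + dy)
    positions[player_id] = new_pos
    return new_pos   # sent back to the client: "You're at (11, 10)"

print(handle_command("player_1", "right"))  # (11, 10)
```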



Effect of network delays.


In summary: the game state is managed by the server alone. Clients send their actions to the server. The server updates the game state periodically, and then sends the new game state back to clients, who just render it on the screen.


Dealing with networks


The dumb client scheme works fine for slow turn based games, for example strategy games or poker. It would also work in a LAN setting, where communications are, for all practical purposes, instantaneous. But this breaks down when used for a fast-paced game over a network such as the internet.


Let’s talk physics. Suppose you’re in San Francisco, connected to a server in New York. That’s approximately 4,000 km, or 2,500 miles (roughly the distance between Lisbon and Moscow). Nothing can travel faster than light, not even bytes on the Internet (which at the lower level are pulses of light, electrons in a cable, or electromagnetic waves). Light travels at approximately 300,000 km/s, so it takes about 13 ms to travel 4,000 km.
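The same back-of-the-envelope number, as a quick check:

```python
distance_km = 4000             # San Francisco to New York, roughly
speed_of_light_km_s = 300_000
one_way_ms = distance_km / speed_of_light_km_s * 1000
print(round(one_way_ms, 1))    # ~13.3 ms, the theoretical best case
```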


This may sound quite fast, but it’s actually a very optimistic setup – it assumes data travels at the speed of light in a straight path, which is most likely not the case. In real life, data goes through a series of jumps (called hops in networking terminology) from router to router, most of which aren’t done at lightspeed; routers themselves introduce a bit of delay, since packets must be copied, inspected, and rerouted.


For the sake of the argument, let’s assume data takes 50 ms from client to server. This is close to a best-case scenario – what happens if you’re in NY connected to a server in Tokyo? What if there’s network congestion for some reason? Delays of 100, 200, even 500 ms are not unheard of.


Back to our example, your client sends some input to the server (“I pressed the right arrow”). The server gets it 50 ms later. Let’s say the server processes the request and sends back the updated state immediately. Your client gets the new game state (“You’re now at (1, 0)”) 50 ms later.


From your point of view, what happened is that you pressed the right arrow but nothing happened for a tenth of a second; then your character finally moved one square to the right. This perceived lag between your inputs and its consequences may not sound like much, but it’s noticeable – and of course, a lag of half a second isn’t just noticeable, it actually makes the game unplayable.


Summary


Networked multiplayer games are incredibly fun, but introduce a whole new class of challenges. The authoritative server architecture is pretty good at stopping most cheats, but a straightforward implementation may make games quite unresponsive to the player.


In the following articles, we’ll explore how we can build a system based on an authoritative server while minimizing the delay experienced by the players, to the point of making it almost indistinguishable from local or single-player games.

Common Game Server Architectures, Part III

Posted on 07-11-2016 | In GS

Casual game servers

Casual games are similar to Battle.net-style servers in that both use a single global-zone architecture. The difference is that there are room servers plus dedicated game servers: the game itself is no longer run peer-to-peer between players, but on dedicated game servers that the clients connect to:

Like Battle.net, this is a global-zone architecture, so user data cannot be loaded into memory in one shot and modified in place the way a zoned RPG does. Under a global architecture, to cope with one user playing several games at the same time, user data has to be split into base data and per-game data, and game data in turn into score data and document data. Scores such as win/draw/loss counts can be submitted directly as incremental updates, while the more common document-style data requires read/write tokens: there is only one write token, but many read tokens. When the same account plays the same game on two machines at once, the game started first acquires the write token and may modify any of the user's data; the game started later can still submit incremental win/draw/loss score changes, but gets read-only access to the rest of the user data, so the game keeps running while the user is warned that their game data is locked.
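A minimal sketch of that write-token idea, with an in-memory dict standing in for whatever storage the original design actually uses; all names and the session model are illustrative assumptions:

```python
# One write token per (account, game); the first session to ask for it wins.
write_tokens = {}   # (uid, game_id) -> session_id of the current writer

def open_session(uid, game_id, session_id):
    """Return 'read-write' for the first session, 'read-only' for later ones."""
    key = (uid, game_id)
    if key not in write_tokens:
        write_tokens[key] = session_id
        return "read-write"
    return "read-only"   # may still submit win/draw/loss increments

def close_session(uid, game_id, session_id):
    if write_tokens.get((uid, game_id)) == session_id:
        del write_tokens[(uid, game_id)]

print(open_session(42, "chess", "desktop"))  # read-write
print(open_session(42, "chess", "laptop"))   # read-only: game data is locked
```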

. . .

Common Game Server Architectures, Part II

Posted on 07-11-2016 | In GS

Third-generation game servers (2007)

Since World of Warcraft, seamless world maps have become the expectation. Compared with older games, where walking a few steps meant switching scenes and waiting through tens of seconds of loading each time – something that badly breaks the experience – a seamless map has become a standard feature for large MMORPGs after 2005. Unlike the old approach of carving the game up by map, in a seamless world it is no longer the case that the players on one map are handled by one and only one server:

Each Node server manages one region of the map, and a NodeMaster (NM) provides overall management for them. A higher-level World service provides continent-level management. Various auxiliary servers are omitted here – the traditional database front end, login servers, logging and monitoring – and lumped together as ADMIN. Under this structure, a player walking from one region into another needs a small amount of extra handling:

Player 1 is controlled entirely by node A, and player 3 entirely by node B. Player 2, sitting on the edge between the two nodes, is served by both A and B at the same time. While player 2 moves from A to B, it keeps asking A about the situation on its left and B about the situation on its right, but it still belongs to A. Only once player 2 has moved well clear of the A/B boundary is it handed over completely to B. Following this logic, the world map is cut into blocks that are assigned to different Nodes to manage.

The blocks a Node is responsible for do not need to be geographically contiguous. For example, the sparsely populated blocks along the continent's edges and in the mountains can all be handed to a single Node, and those blocks have no need to be adjacent. Which blocks a given Node manages can be changed in the NodeMaster configuration during scheduled maintenance, based on the load observed while the game is running.
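A minimal sketch of the block-to-Node mapping and the hysteresis-style handover described above; the grid size, the handover margin and all names are assumptions, not the post's actual design:

```python
BLOCK_SIZE = 256        # world units per map block
HANDOVER_MARGIN = 64    # keep the old owner until the player is well past the border

# Which Node owns which block; reloadable from the NodeMaster during maintenance.
block_to_node = {(0, 0): "node_A", (1, 0): "node_B"}

def block_of(x, y):
    return (int(x // BLOCK_SIZE), int(y // BLOCK_SIZE))

def owner_node(x, y):
    return block_to_node[block_of(x, y)]

def maybe_handover(player, x, y):
    """Transfer ownership only once the player is clearly inside the new block."""
    new_node = owner_node(x, y)
    if new_node != player["node"]:
        bx, _ = block_of(x, y)
        distance_into_block = x - bx * BLOCK_SIZE
        if distance_into_block > HANDOVER_MARGIN:
            player["node"] = new_node

player2 = {"node": "node_A"}
maybe_handover(player2, 260, 10)   # just across the border: still node_A
maybe_handover(player2, 340, 10)   # well inside block (1, 0): now node_B
print(player2["node"])
```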

The first problem this runs into is that many Node servers need to talk to players, so they must ask a management server which Gate the player with a given UID is connected to. With the old per-scene partitioning this was not a big deal, since the answer could be cached after one lookup, but now there are many more server types, players drift around, and looking up players by UID becomes awkward. On top of that, a GATE has to compute dynamically from coordinates which Nodes it should talk to, so its logic keeps getting thicker. It therefore becomes necessary to split the "user object" out of the GATE, which should only manage connections, leading to the following model:

. . .

Common Game Server Architectures, Part I

Posted on 07-11-2016 | In GS

Weak-interaction servers: card games, endless runners, etc.

Card and runner games involve weak interaction: players never fight each other face to face in real time; they just play against each other's offline data, compute a leaderboard, and buy and sell items. The implementation is therefore usually a simple HTTP server:

At login, asymmetric encryption (RSA, DH) can be used. The server computes an encryption key by hashing the client's uid, the current timestamp and the server's private key, and sends that key to the client. From then on both sides talk over HTTP and encrypt with RC4 using that key. The client keeps the key and the timestamp in memory for later requests; the server does not need to store the key, because it can recompute it on every request from the uid and timestamp sent by the client plus its own private key. Imitating TLS-like behavior in this way preserves the client's identity across multiple HTTP requests, and the timestamp guarantees that the same person gets a different key on each login.
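A minimal sketch of the stateless key-derivation part of that scheme (the RC4 step and the HTTP transport are omitted); the HMAC construction and all names are assumptions, not the post's exact protocol:

```python
import hashlib
import hmac
import time

SERVER_SECRET = b"server-private-key"   # known only to the server

def derive_key(uid: str, timestamp: int) -> bytes:
    """Same inputs always give the same key, so the server stores nothing."""
    msg = f"{uid}:{timestamp}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

# At login the server computes the key once and returns it to the client.
ts = int(time.time())
client_key = derive_key("uid-1001", ts)

# On every later request the client sends (uid, ts); the server just recomputes.
assert derive_key("uid-1001", ts) == client_key
```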

At the start of each round the client makes a request to fetch the level data, and when the round is over it submits the result; the server checks that it is legal and decides what rewards to grant. A single MySQL or MongoDB instance is enough for the database, with Redis as an optional cache behind it. To implement notifications, have the client poll the server every 15 seconds: if there is a message, fetch it; if not, gradually lengthen the polling interval, say to 30 seconds. When a message does arrive, shorten the interval to 10 seconds, then 5, so that even a two-person chat adapts its latency automatically.
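A sketch of that adaptive polling loop: lengthen the interval when idle, shorten it when messages arrive. The step values follow the post; `fetch_messages` is a made-up stand-in for the real HTTP call:

```python
import time

MIN_INTERVAL, MAX_INTERVAL = 5, 30

def fetch_messages():
    # Hypothetical HTTP call returning a (possibly empty) list of messages.
    return []

def poll_forever():
    interval = 15                                         # start at 15 seconds
    while True:
        if fetch_messages():
            interval = max(MIN_INTERVAL, interval - 5)    # busy: 15 -> 10 -> 5
        else:
            interval = min(MAX_INTERVAL, interval + 5)    # idle: back off toward 30
        time.sleep(interval)
```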

A server like this is more than enough to build a Three Kingdoms-style strategy game or a card or runner game. Because the logic is simple and players barely interact, developing over HTTP is fast, and a plain browser is all you need to debug the logic.

First-generation game servers (1978)

In 1978, Roy Trubshaw, a student at the University of Essex in the UK, wrote the world's first MUD program, MUD1. After the University of Essex connected to ARPANET in 1980, many outside players joined, including players from abroad. Once the MUD1 source code was shared on ARPANET, numerous adaptations appeared, and MUDs spread around the world. On top of the continually improved MUD1, the open-source MudOS (1991) emerged and became the ancestor of many online games:

MudOS was written in C. Because players interact fairly heavily with one another (chat, trading, PK), MudOS serves all players with a single thread and non-blocking sockets: every player's requests go to the same thread for processing, and the main thread updates all objects once per second (network send/receive, updating object state machines, handling timeouts, refreshing the map, refreshing NPCs).
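A minimal sketch of that single-threaded, non-blocking style using Python's selectors module, with a once-per-second world tick; it illustrates the pattern only and is not MudOS code:

```python
import selectors
import socket
import time

sel = selectors.DefaultSelector()
TICK = 1.0   # update all objects once per second, as MudOS did

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle_player)

def handle_player(conn):
    data = conn.recv(4096)
    if not data:
        sel.unregister(conn)
        conn.close()
        return
    # All player requests are handled here, on the single main thread.
    conn.sendall(b"echo: " + data)

def update_world():
    pass   # state machines, timeouts, map refresh, NPCs, ...

def main():
    server = socket.socket()
    server.bind(("0.0.0.0", 4000))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    next_tick = time.monotonic() + TICK
    while True:
        timeout = max(0.0, next_tick - time.monotonic())
        for key, _ in sel.select(timeout):   # wait for sockets or the next tick
            key.data(key.fileobj)            # call accept() or handle_player()
        if time.monotonic() >= next_tick:
            update_world()
            next_tick += TICK

if __name__ == "__main__":
    main()
```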

. . .

Fundamentals for Reading Open-Source Server Code

Posted on 07-04-2016 | In Linux

When reading the source code of open-source servers, not knowing the following topics leaves blind spots that make the code hard to follow.
This post covers the relevant programming concepts, summarizing my earlier notes.
As always, reading with a question in mind is the fastest way to get into the material.

What does a process's memory layout look like?

Mnemonic: text, initialized data, heap, stack.


The memory allocated to each process consists of several parts, usually called segments, as listed below.

  • Text segment
    Contains the machine-language instructions of the program the process runs. The
    text segment is read-only, so the process cannot accidentally modify its own
    instructions through a bad pointer. Because several processes can run the same
    program at once, the text segment is also shareable, so a single copy of the
    program code can be mapped into the virtual address space of all of them.
  • Initialized data segment
    Contains explicitly initialized global and static variables. Their values are
    read from the executable file when the program is loaded into memory.
  • Uninitialized data segment
    Contains global and static variables that are not explicitly initialized. Before
    the program starts, the system initializes all memory in this segment to 0. For
    historical reasons it is often called the BSS segment, after the old assembler
    mnemonic "block started by symbol". Initialized and uninitialized globals and
    statics are stored separately mainly because the uninitialized ones need no space
    in the on-disk executable; the file only records the location and size of the
    uninitialized data segment, and the program loader allocates that space at run time.
  • Heap
    An area from which memory can be allocated dynamically at run time (for
    variables); the top of the heap is called the program break.
  • Stack
    A segment that grows and shrinks dynamically, made up of stack frames. The system
    allocates one stack frame for each function currently being called; the frame
    stores the function's local (automatic) variables, arguments, and return value.

What thread synchronization mechanisms are there?

  • Mutex
  • Condition variable
  • Spinlock
    Similar to a mutex, but instead of blocking by putting the thread to sleep it
    busy-waits (spins) until the lock can be acquired. A spinlock is appropriate when
    the lock is held only briefly and the thread doesn't want to pay the cost of
    being rescheduled.
  • Read-write lock (also called a shared-exclusive lock)
    When a read-write lock is held in read mode it is said to be locked in shared
    mode; when held in write mode it is locked in exclusive mode (see the sketch
    after this list).

. . .

Can MongoDB Replace MySQL? Can Redis Replace memcached?

Posted on 06-30-2016 | In DB

MongoDB and memcached are not in the same category of system.

MongoDB is a document-oriented, non-relational database; its strengths are a fairly powerful query capability and the ability to store very large amounts of data.

So there is no question of MongoDB replacing memcached or vice versa. The system closest to memcached is Redis.

Redis and memcached are both in-memory stores: data lives in memory and is accessed directly over TCP. Their strengths are speed and high concurrency; their weaknesses are limited data types and weak query capabilities, so they are generally used as caches.

In current projects, Redis is generally used in place of memcached.

. . .
