Getting Started with Boost.Asio
First, let's take a look at what Boost.Asio is and how to build it.
On Linux it's a one-liner: sudo apt-get install libboost-all-dev
What is Boost.Asio?
Simply put, Boost.Asio is a cross-platform C++ library used mainly for network programming and other low-level input/output.
. . .
RPC stands for Remote Procedure Call, a form of inter-process communication. It allows a program to call a procedure or function in another address space (usually on another machine on a shared network) without the programmer explicitly coding the details of the remote call; whether the call is local or remote, the code the programmer writes is essentially the same. Tencent's phxrpc framework, for example, uses Protobuf as the IDL to describe RPC interfaces and the data structures exchanged. An RPC framework needs:
1. A complete serialization framework. Serialization is the first step in moving data between processes: how to reliably and conveniently turn an object into binary (or another format), and, on the receiving end, how to correctly and safely restore that binary back into an object.
2. A solid underlying communication protocol. It must provide the right semantic abstractions: what concurrency the server supports, whether a single client at a time or many concurrent clients; the client's concurrency model is in turn determined by the server's. It also needs robust and sufficient interface abstractions, because in a distributed environment "anything can happen" and all sorts of failures must be handled.
3. A workable reflection system. Yes, a reflection system built in C++. This is the most critical step, and C++11 makes it feasible, because we need to register a class's details so that they can be invoked over RPC.
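The pieces described above (serialization, transport, registration) can be sketched crudely as a name-to-callable registry, standing in for the reflection step, with strings standing in for serialized bytes. All names here (RpcRegistry, register_method) are invented for the sketch and are not from phxrpc or any real framework.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch: a registry mapping method names to callables.
// A real framework would generate this registration from an IDL such
// as Protobuf, and the argument/result would be serialized bytes.
class RpcRegistry {
public:
    using Handler = std::function<std::string(const std::string&)>;

    void register_method(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }

    // Dispatch a "remote" call by method name.
    std::string call(const std::string& name, const std::string& arg) const {
        auto it = handlers_.find(name);
        assert(it != handlers_.end() && "unknown RPC method");
        return it->second(arg);
    }

private:
    std::map<std::string, Handler> handlers_;
};
```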
Hi, I’m Glenn Fiedler and welcome to Networking for Game Programmers.
Have you ever wondered how multiplayer games work?
From the outside it seems magical: two or more players sharing a consistent experience across the network like they actually exist together in the same virtual world.
But as programmers we know the truth of what is actually going on underneath is quite different from what you see. It turns out it’s all an illusion. A massive sleight-of-hand. What you perceive as a shared reality is only an approximation unique to your own point of view and place in time.
Peer-to-Peer Lockstep

In the beginning games were networked peer-to-peer, with each computer exchanging information with the others in a fully connected mesh topology. You can still see this model alive today in RTS games, and interestingly, perhaps because it was the first way, it’s still how most people think that game networking works.
The basic idea is to abstract the game into a series of turns and a set of command messages that, when processed at the beginning of each turn, direct the evolution of the game state. For example: move unit, attack unit, construct building. All that is needed to network this is to run exactly the same set of commands and turns on each player’s machine starting from a common initial state.
Of course this is an overly simplistic explanation and glosses over many subtle points, but it gets across the basic idea of how networking for RTS games work. You can read more about this networking model here: 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond.
It seems so simple and elegant, but unfortunately there are several limitations.
First, it’s exceptionally difficult to ensure that a game is completely deterministic; that each turn plays out identically on each machine. For example, one unit could take a slightly different path on two machines, arriving sooner to a battle and saving the day on one machine, while arriving later on the other and erm. not saving the day. Like a butterfly flapping its wings and causing a hurricane on the other side of the world, one tiny difference results in complete desynchronization over time.
The next limitation is that in order to ensure that the game plays out identically on all machines it is necessary to wait until all players’ commands for that turn are received before simulating that turn. This means that each player in the game has latency equal to that of the most lagged player. RTS games typically hide this by providing audio feedback immediately and/or playing cosmetic animation, but ultimately any truly game-affecting action may occur only after this delay has passed.
The final limitation occurs because of the way the game synchronizes by sending just the command messages which change the state. In order for this to work it is necessary for all players to start from the same initial state. Typically this means that each player must join up in a lobby before commencing play, and although it is technically possible to support late join, it is not common due to the difficulty of capturing and transmitting a completely deterministic starting point in the middle of a live game.
Despite these limitations this model naturally suits RTS games and it still lives on today in games like “Command and Conquer”, “Age of Empires” and “Starcraft”. The reason being that in RTS games the game state consists of many thousands of units and is simply too large to exchange between players. These games have no choice but to exchange the commands which drive the evolution of the game state.
But for other genres, the state of the art has moved on. So that’s it for the deterministic peer-to-peer lockstep networking model. Now let’s look at the evolution of action games starting with Doom, Quake and Unreal.
Client/Server

In the era of action games, the limitations of peer-to-peer lockstep became apparent in Doom, which despite playing well over the LAN played terribly over the internet for typical users:
Although it is possible to connect two DOOM machines together across the Internet using a modem link, the resulting game will be slow, ranging from the unplayable (e.g. a 14.4Kbps PPP connection) to the marginally playable (e.g. a 28.8Kbps modem running a Compressed SLIP driver). Since these sorts of connections are of only marginal utility, this document will focus only on direct net connections.
The problem of course was that Doom was designed for networking over LAN only, and used the peer-to-peer lockstep model described previously for RTS games. Each turn, player inputs (key presses etc.) were exchanged with the other peers, and before any player could simulate a frame all other players’ key presses needed to be received.
In other words, before you could turn, move or shoot you had to wait for the inputs from the most lagged modem player. Just imagine the wailing and gnashing of teeth that this would have resulted in for the sort of folks with internet connections that were “of only marginal utility”. :)
In order to move beyond the LAN and the well connected elite at university networks and large companies, it was necessary to change the model. And in 1996, that’s exactly what John Carmack and his team did when he released Quake using client/server instead of peer-to-peer.
Now instead of each player running the same game code and communicating directly with each other, each player was a “client”, and they all communicated with just one computer called the “server”. There was no longer any need for the game to be deterministic across all machines, because the game really only existed on the server. Each client effectively acted as a dumb terminal showing an approximation of the game as it played out on the server.
In a pure client/server model you run no game code locally, instead sending your inputs such as key presses, mouse movement, clicks to the server. In response the server updates the state of your character in the world and replies with a packet containing the state of your character and other players near you. All the client has to do is interpolate between these updates to provide the illusion of smooth movement and BAM you have a networked game.
This was a great step forward. The quality of the game experience now depended on the connection between the client and the server instead of the most lagged peer in the game. It also became possible for players to come and go in the middle of the game, and the number of players increased as client/server reduced the bandwidth required on average per-player.
But there were still problems with the pure client/server model:
While I can remember and justify all of my decisions about networking from DOOM through Quake, the bottom line is that I was working with the wrong basic assumptions for doing a good internet game. My original design was targeted at < 200ms connection latencies. People that have a digital connection to the internet through a good provider get a pretty good game experience. Unfortunately, 99% of the world gets on with a slip or ppp connection over a modem, often through a crappy overcrowded ISP. This gives 300+ ms latencies, minimum. Client. User's modem. ISP's modem. Server. ISP's modem. User's modem. Client. God, that sucks.
Ok, I made a bad call. I have a T1 to my house, so I just wasn't familliar with PPP life. I'm addressing it now.
The problem was of course latency.
What happened next would change the industry forever.
Client-Side Prediction

In the original Quake you felt the latency between your computer and the server. Press forward and you’d wait however long it took for packets to travel to the server and back to you before you’d actually start moving. Press fire and you wait for that same delay before shooting.
If you’ve played any modern FPS like Call of Duty: Modern Warfare, you know this is no longer what happens. So how exactly do modern FPS games remove the latency on your own actions in multiplayer? Historically this was solved in two parts: first, client-side movement prediction, developed by John Carmack for QuakeWorld and later incorporated as part of Tim Sweeney’s Unreal networking model; and second, latency compensation, developed by Yahn Bernier at Valve for Counter-Strike. Here we’ll focus on the first part: hiding the latency on the local player’s movement.
When writing about his plans for the soon to be released QuakeWorld, John Carmack said:
I am now allowing the client to guess at the results of the users movement until the authoritative response from the server comes through. This is a biiiig architectural change. The client now needs to know about solidity of objects, friction, gravity, etc. I am sad to see the elegant client-as-terminal setup go away, but I am practical above idealistic.
So now in order to remove the latency, the client runs more code than it previously did. It is no longer a dumb terminal sending inputs to the server and interpolating between state sent back. Instead it is able to predict the movement of your character locally and immediately in response to your input, running a subset of the game code for your player character on the client machine.
Now as soon as you press forward, there is no wait for a round trip between client and server - your character starts moving forward right away.
The difficulty of this approach is not in the prediction, for the prediction works just as normal game code does - evolving the state of the game character forward in time according to the player’s input. The difficulty is in applying the correction back from the server to resolve cases when the client and server disagree about where the player character should be and what it is doing.
Now at this point you might wonder. Hey, if you are running code on the client - why not just make the client authoritative over their player character? The client could run the simulation code for their own character and simply tell the server where they are each time they send a packet. The problem with this is that if each player were able to simply tell the server “here is my current position” it would be trivially easy to hack the client such that a cheater could instantly dodge the RPG about to hit them, or teleport instantly behind you to shoot you in the back.
So in FPS games it is absolutely necessary that the server is authoritative over the state of each player character, in spite of the fact that each player is locally predicting the motion of their own character to hide latency. As Tim Sweeney writes in The Unreal Networking Architecture: “The Server Is The Man”.
Here is where it gets interesting. If the client and the server disagree, the client must accept the update for the position from the server, but due to latency between the client and server this correction is necessarily in the past. For example, if it takes 100ms from client to server and 100ms back, then any server correction for the player character position will appear to be 200ms in the past, relative to the time up to which the client has predicted their own movement.
If the client were to simply apply this server correction update verbatim, it would yank the client back in time, completely undoing any client-side prediction. How then to solve this while still allowing the client to predict ahead?
The solution is to keep a circular buffer of past character state and input for the local player on the client, then when the client receives a correction from the server, it first discards any buffered state older than the corrected state from the server, and replays the state from the corrected state up to the present “predicted” time on the client, using the player inputs stored in the circular buffer. In effect the client invisibly “rewinds and replays” the last n frames of local player character movement while holding the rest of the world fixed.
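As a rough illustration of this rewind-and-replay scheme, here is a sketch with the character state boiled down to a single 1D position. All names (Snapshot, PredictedPlayer, apply_correction) are invented for the sketch, and the “movement code” is just an addition; it is not how Quake actually structures this.

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

// One entry in the buffer of past state and input for the local player.
struct Snapshot {
    uint32_t sequence;  // input sequence number
    float    input;     // player input applied this frame (move delta)
    float    position;  // predicted position after applying the input
};

struct PredictedPlayer {
    float position = 0.0f;
    std::deque<Snapshot> history;  // buffer of past state + input

    // Predict locally: apply the input immediately and remember it.
    void predict(uint32_t sequence, float input) {
        position += input;  // stand-in for the real movement code
        history.push_back({sequence, input, position});
        if (history.size() > 64)  // bound the buffer
            history.pop_front();
    }

    // Server says: "at input 'sequence' you were at 'server_position'".
    // Discard history up to that point, then replay the remaining
    // buffered inputs on top of the corrected state.
    void apply_correction(uint32_t sequence, float server_position) {
        while (!history.empty() && history.front().sequence <= sequence)
            history.pop_front();
        position = server_position;
        for (auto& s : history) {
            position += s.input;    // replay the stored input
            s.position = position;  // refresh the stored prediction
        }
    }
};
```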
This way the player appears to control their own character without any latency, and provided that the client and server character simulation code is reasonably deterministic, giving the same result for the same inputs on the client and server, it is rarely corrected. It is as Tim Sweeney describes:
… the best of both worlds: In all cases, the server remains completely authoritative. Nearly all the time, the client movement simulation exactly mirrors the client movement carried out by the server, so the client’s position is seldom corrected. Only in the rare case, such as a player getting hit by a rocket, or bumping into an enemy, will the client’s location need to be corrected.
In other words, only when the player’s character is affected by something external to the local player’s input, which cannot possibly be predicted on the client, will the player’s position need to be corrected. That and of course, if the player is attempting to cheat :)
Hi, I’m Glenn Fiedler and welcome to Networking for Game Programmers.
In the previous article, we added our own concept of virtual connection on top of UDP. In this article we’re going to add reliability, ordering and congestion avoidance to our virtual UDP connection.
Those of you familiar with TCP know that it already has its own concept of connection, reliability-ordering and congestion avoidance, so why are we rewriting our own mini version of TCP on top of UDP?
The issue is that multiplayer action games rely on a steady stream of packets sent at rates of 10 to 30 packets per second, and for the most part, the data contained in these packets is so time sensitive that only the most recent data is useful. This includes data such as player inputs, the position, orientation and velocity of each player character, and the state of physics objects in the world.
The problem with TCP is that it abstracts data delivery as a reliable ordered stream. Because of this, if a packet is lost, TCP has to stop and wait for that packet to be resent. This interrupts the steady stream of packets because more recent packets must wait in a queue until the resent packet arrives, so packets are received in the same order they were sent.
What we need is a different type of reliability. Instead of having all data treated as a reliable ordered stream, we want to send packets at a steady rate and get notified when packets are received by the other computer. This allows time sensitive data to get through without waiting for resent packets, while letting us make our own decision about how to handle packet loss at the application level.
It is not possible to implement a reliability system with these properties using TCP, so we have no choice but to roll our own reliability on top of UDP.
The goal of our reliability system is simple: we want to know which packets arrive at the other side of the connection.
First we need a way to identify packets.
What if we added the concept of a “packet id”? Let’s make it an integer value. We could start this at zero then with each packet we send, increase the number by one. The first packet we send would be packet 0, and the 100th packet sent would be packet 99.
This is actually quite a common technique. It’s even used in TCP! These packet ids are called sequence numbers. While we’re not going to implement reliability exactly as TCP does, it makes sense to use the same terminology, so we’ll call them sequence numbers from now on.
Since UDP does not guarantee the order of packets, the 100th packet received is not necessarily the 100th packet sent. It follows that we need to insert the sequence number somewhere in the packet, so that the computer at the other side of the connection knows which packet it is.
We already have a simple packet header for the virtual connection from the previous article, so we’ll just add the sequence number in the header like this:
[uint protocol id]
[uint sequence]
(packet data…)
Now when the other computer receives a packet it knows its sequence number according to the computer that sent it.
Now that we can identify packets using sequence numbers, the next step is to let the other side of the connection know which packets we receive.
Logically this is quite simple, we just need to take note of the sequence number of each packet we receive, and send those sequence numbers back to the computer that sent them.
Because we are sending packets continuously between both machines, we can just add the ack to the packet header, just like we did with the sequence number:
[uint protocol id]
[uint sequence]
[uint ack]
(packet data…)
Our general approach is as follows:
Each time we send a packet we increase the local sequence number
When we receive a packet, we check the sequence number of the packet against the sequence number of the most recently received packet, called the remote sequence number. If the packet is more recent, we update the remote sequence to be equal to the sequence number of the packet.
When we compose packet headers, the local sequence becomes the sequence number of the packet, and the remote sequence becomes the ack.
This simple ack system works provided that one packet comes in for each packet we send out.
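A minimal sketch of this one-ack-per-packet bookkeeping. The names (Header, Endpoint) are invented for the sketch, and sequence wrap-around is ignored here for brevity:

```cpp
#include <cassert>
#include <cstdint>

// Wire header: protocol id, the sender's sequence, and the most recent
// sequence the sender has received from the other side (the ack).
struct Header {
    uint32_t protocol_id;
    uint32_t sequence;
    uint32_t ack;
};

struct Endpoint {
    explicit Endpoint(uint32_t id) : protocol_id(id) {}

    uint32_t protocol_id;
    uint32_t local_sequence  = 0;  // incremented per packet sent
    uint32_t remote_sequence = 0;  // most recent sequence received

    // Stamp the local sequence and the remote sequence (as ack) into
    // every outgoing packet, then bump the local sequence.
    Header make_header() {
        return Header{protocol_id, local_sequence++, remote_sequence};
    }

    void on_receive(const Header& h) {
        if (h.protocol_id != protocol_id) return;  // not our traffic
        if (h.sequence >= remote_sequence)         // wrap-around ignored
            remote_sequence = h.sequence;
    }
};
```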
But what if packets clump up such that two packets arrive before we send a packet? We only have space for one ack per-packet, so what do we do?
Now consider the case where one side of the connection is sending packets at a faster rate. If the client sends 30 packets per-second, and the server only sends 10 packets per-second, we need at least 3 acks included in each packet sent from the server.
Let’s make it even more complex! What if the packet containing the ack is lost? The computer that sent the original packet would think it got lost, even though it was actually received!
It seems like we need to make our reliability system… more reliable!
Here is where we diverge significantly from TCP.
What TCP does is maintain a sliding window where the ack sent is the sequence number of the next packet it expects to receive, in order. If TCP does not receive an ack for a given packet, it stops and resends a packet with that sequence number again. This is exactly the behavior we want to avoid!
In our reliability system, we never resend a packet with a given sequence number. We send packet n exactly once, then we send n+1, n+2 and so on. We never stop and resend packet n if it was lost, we leave it up to the application to compose a new packet containing the data that was lost, if necessary, and this packet gets sent with a new sequence number.
Because we’re doing things differently to TCP, it’s now possible to have holes in the set of packets we ack, so it is no longer sufficient to just state the sequence number of the most recent packet we have received.
We need to include multiple acks per-packet.
How many acks do we need?
As mentioned previously we have the case where one side of the connection sends packets faster than the other. Let’s assume that the worst case is one side sending no less than 10 packets per-second, while the other sends no more than 30. In this case, the average number of acks we’ll need per-packet is 3, but if packets clump up a bit, we would need more. Let’s say 6-10 worst case.
What about acks that don’t get through because the packet containing the ack is lost?
To solve this, we’re going to use a classic networking strategy of using redundancy to defeat packet loss!
Let’s include 33 acks per-packet, and this isn’t just going to be up to 33, but always 33. So for any given ack we redundantly send it up to 32 additional times, just in case one packet with the ack doesn’t get through!
But how can we possibly fit 33 acks in a packet? At 4 bytes per-ack that’s 132 bytes!
The trick is to represent the 32 previous acks before “ack” using a bitfield:
[uint protocol id]
[uint sequence]
[uint ack]
[uint ack bitfield]
(packet data…)
We define “ack bitfield” such that each bit corresponds to acks of the 32 sequence numbers before “ack”. So let’s say “ack” is 100. If the first bit of “ack bitfield” is set, then the packet also includes an ack for packet 99. If the second bit is set, then packet 98 is acked. This goes all the way down to the 32nd bit for packet 68.
Our adjusted algorithm looks like this:
Each time we send a packet we increase the local sequence number
When we receive a packet, we check the sequence number of the packet against the remote sequence number. If the packet sequence is more recent, we update the remote sequence number.
When we compose packet headers, the local sequence becomes the sequence number of the packet, and the remote sequence becomes the ack. The ack bitfield is calculated by looking into a queue of up to 33 packets, containing sequence numbers in the range [remote sequence - 32, remote sequence]. We set bit n (in [1,32]) in ack bits to 1 if the sequence number remote sequence - n is in the received queue.
Additionally, when a packet is received, ack bitfield is scanned and if bit n is set, then we acknowledge sequence number packet sequence - n, if it has not been acked already.
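Here is a hedged sketch of building and reading the ack bitfield. A std::set stands in for the queue of recently received sequence numbers, and sequence wrap-around is ignored to keep the illustration short; the function names are invented:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Build the bitfield: bit n-1 acks sequence (ack - n), for n in [1,32].
uint32_t make_ack_bits(uint32_t ack, const std::set<uint32_t>& received) {
    uint32_t bits = 0;
    for (uint32_t n = 1; n <= 32; ++n) {
        if (ack >= n && received.count(ack - n))
            bits |= (1u << (n - 1));
    }
    return bits;
}

// On receipt, recover the set of acked sequence numbers from ack + bits.
std::set<uint32_t> read_acks(uint32_t ack, uint32_t bits) {
    std::set<uint32_t> acked = {ack};  // "ack" itself is always acked
    for (uint32_t n = 1; n <= 32; ++n) {
        if ((bits & (1u << (n - 1))) && ack >= n)
            acked.insert(ack - n);
    }
    return acked;
}
```

For example, if ack is 100 and packets 97 and 98 were received but 99 was not, bits 1 and 2 of the bitfield are set and bit 0 is clear.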
With this improved algorithm, you would have to lose 100% of packets for more than a second to stop an ack getting through. And of course, it easily handles different send rates and clumped up packet receives.
Now that we know what packets are received by the other side of the connection, how do we detect packet loss?
The trick here is to flip it around and say that if you don’t get an ack for a packet within a certain amount of time, then we consider that packet lost.
Given that we are sending at no more than 30 packets per second, and we are redundantly sending acks roughly 30 times, if you don’t get an ack for a packet within one second, it is very likely that packet was lost.
So we are playing a bit of a trick here: while we can know 100% for sure which packets got through, we can only be reasonably certain of the set of packets that didn’t arrive.
The implication of this is that any data which you resend using this reliability technique needs to have its own message id so that if you receive it multiple times, you can discard it. This can be done at the application level.
No discussion of sequence numbers and acks would be complete without coverage of sequence number wrap around!
Sequence numbers and acks are 32 bit unsigned integers, so they can represent numbers in the range [0,4294967295]. That’s a very high number! So high that if you sent 30 packets per-second, it would take over four and a half years for the sequence number to wrap back around to zero.
But perhaps you want to save some bandwidth so you shorten your sequence numbers and acks to 16 bit integers. You save 4 bytes per-packet, but now they wrap around in only half an hour.
So how do we handle this wrap around case?
The trick is to realize that if the current sequence number is already very high, and the next sequence number that comes in is very low, then you must have wrapped around. So even though the new sequence number is numerically lower than the current sequence value, it actually represents a more recent packet.
For example, let’s say we encoded sequence numbers in one byte (not recommended btw. :)), then they would wrap around after 255 like this:
… 252, 253, 254, 255, 0, 1, 2, 3, …
To handle this case we need a new function that is aware of the fact that sequence numbers wrap around to zero after 255, so that 0, 1, 2, 3 are considered more recent than 255. Otherwise, our reliability system stops working after you receive packet 255.
Here’s a function for 16 bit sequence numbers:
inline bool sequence_greater_than( uint16_t s1, uint16_t s2 )
{
    // Close together: compare directly. Far apart: the smaller number
    // must have wrapped around, so it is actually more recent.
    return ( ( s1 > s2 ) && ( s1 - s2 <= 32768 ) ) ||
           ( ( s1 < s2 ) && ( s2 - s1 >  32768 ) );
}
This function works by comparing the two numbers and their difference. If their difference is less than 1⁄2 the maximum sequence number value, they must be close together, so we just check whether one is greater than the other, as usual. However, if they are far apart, with their difference greater than 1⁄2 the max sequence, then we paradoxically consider the sequence number more recent if it is less than the current sequence number.
This last bit is what handles the wrap around of sequence numbers transparently, so 0,1,2 are considered more recent than 255.
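A quick sanity check of the wrap-around behaviour (the function is repeated here so the snippet stands alone):

```cpp
#include <cstdint>

inline bool sequence_greater_than( uint16_t s1, uint16_t s2 )
{
    return ( ( s1 > s2 ) && ( s1 - s2 <= 32768 ) ) ||
           ( ( s1 < s2 ) && ( s2 - s1 >  32768 ) );
}

// Near zero, ordinary comparison applies: 1 is more recent than 0.
// Across the wrap, 0 is more recent than 65535, even though it is
// numerically smaller.
```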
Make sure you include this in any sequence number processing you do.
While we have solved reliability, there is still the question of congestion avoidance. TCP gives you congestion avoidance as part of the package along with its reliability, but UDP has no congestion avoidance whatsoever!
If we just send packets without some sort of flow control, we risk flooding the connection and inducing severe latency (2 seconds plus!) as routers between us and the other computer become congested and buffer up packets. This happens because routers try very hard to deliver all the packets we send, and therefore tend to buffer up packets in a queue before they consider dropping them.
While it would be nice if we could tell the routers that our packets are time sensitive and should be dropped instead of buffered if the router is overloaded, we can’t really do this without rewriting the software for all routers in the world.
Instead, we need to focus on what we can actually do which is to avoid flooding the connection in the first place. We try to avoid sending too much bandwidth in the first place, and then if we detect congestion, we attempt to back off and send even less.
The way to do this is to implement our own basic congestion avoidance algorithm. And I stress basic! Just like reliability, we have no hope of coming up with something as general and robust as TCP’s implementation on the first try, so let’s keep it as simple as possible.
Since the whole point of congestion avoidance is to avoid flooding the connection and increasing round trip time (RTT), it makes sense that the most important metric as to whether or not we are flooding our connection is the RTT itself.
We need a way to measure the RTT of our connection.
Here is the basic technique:
For each packet we send, we add an entry to a queue containing the sequence number of the packet and the time it was sent.
Each time we receive an ack, we look up this entry and note the difference in local time between the time we receive the ack, and the time we sent the packet. This is the RTT time for that packet.
Because the arrival of packets varies with network jitter, we need to smooth this value to provide something meaningful, so each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT. 10% seems to work well for me in practice. This is called an exponentially smoothed moving average, and it has the effect of smoothing out noise in the RTT with a low pass filter.
To ensure that the sent queue doesn’t grow forever, we discard packets once they have exceeded some maximum expected RTT. As discussed in the previous section on reliability, it is exceptionally likely that any packet not acked within a second was lost, so one second is a good value for this maximum RTT.
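The send-queue plus smoothing steps above might be sketched like this. The names are illustrative, time is simply a double in seconds, and the 10% smoothing factor and one second maximum RTT follow the text:

```cpp
#include <cstdint>
#include <map>

// Track send times per sequence number and fold each RTT sample into a
// smoothed estimate by moving 10% of the way toward it (an
// exponentially smoothed moving average).
struct RttEstimator {
    std::map<uint32_t, double> sent_time;  // sequence -> send time (s)
    double rtt = 0.0;                      // smoothed RTT estimate
    double max_rtt = 1.0;                  // entries older than this drop

    void on_send(uint32_t sequence, double now) {
        sent_time[sequence] = now;
    }

    void on_ack(uint32_t sequence, double now) {
        auto it = sent_time.find(sequence);
        if (it == sent_time.end()) return;  // already dropped or acked
        double sample = now - it->second;   // RTT for this packet
        rtt += (sample - rtt) * 0.1;        // move 10% toward the sample
        sent_time.erase(it);
    }

    // Keep the sent queue from growing forever: anything unacked after
    // max_rtt is presumed lost and discarded.
    void expire(double now) {
        for (auto it = sent_time.begin(); it != sent_time.end(); ) {
            if (now - it->second > max_rtt) it = sent_time.erase(it);
            else ++it;
        }
    }
};
```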
Now that we have RTT, we can use it as a metric to drive our congestion avoidance. If RTT gets too large, we send data less frequently; if it’s within acceptable range, we can try sending data more frequently.
As discussed before, let's not get greedy; we'll implement a very basic congestion avoidance scheme. It has two modes, good and bad. I call it simple binary congestion avoidance.
Let’s assume you send packets of a certain size, say 256 bytes. You would like to send these packets 30 times a second, but if conditions are bad, you can drop down to 10 times a second.
So 256 byte packets 30 times a second is around 64kbits/sec, and 10 times a second is roughly 20kbits/sec. There isn't a broadband network connection in the world that can't handle at least 20kbits/sec, so we'll move forward with this assumption. Unlike TCP, which is entirely general for any device with any amount of send/recv bandwidth, we're going to assume a minimum supported bandwidth for devices involved in our connections.
So the basic idea is this. When network conditions are “good” we send 30 packets per-second, and when network conditions are “bad” we drop to 10 packets per-second.
Of course, you can define “good” and “bad” however you like, but I’ve gotten good results considering only RTT. For example if RTT exceeds some threshold (say 250ms) then you know you are probably flooding the connection. Of course, this assumes that nobody would normally exceed 250ms under non-flooding conditions, which is reasonable given our broadband requirement.
How do you switch between good and bad? The algorithm I like to use operates as follows:
If you are currently in good mode, and conditions become bad, immediately drop to bad mode
If you are in bad mode, and conditions have been good for a specific length of time ’t’, then return to good mode
To avoid rapid toggling between good and bad mode, if you drop from good mode to bad in under 10 seconds, double the amount of time ’t’ before bad mode goes back to good. Clamp this at some maximum, say 60 seconds.
To avoid punishing good connections when they have short periods of bad behavior, for each 10 seconds the connection is in good mode, halve the time ’t’ before bad mode goes back to good. Clamp this at some minimum like 1 second.
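The mode-switching rules above can be sketched as follows. The 250ms RTT threshold and the 10s/60s/1s constants follow the text; the structure and names are assumptions of this sketch, not the article's exact source:

```cpp
#include <algorithm>
#include <cassert>

// Simple binary (good/bad) congestion avoidance as described above.
struct CongestionControl
{
    bool good = true;          // current mode
    float return_time = 4.0f;  // seconds of good conditions required to leave bad mode ('t')
    float good_timer = 0.0f;   // time conditions have been continuously good while in bad mode
    float mode_timer = 0.0f;   // time spent in the current mode

    // dt = seconds since last update, rtt = smoothed round trip time in seconds
    void update(float dt, float rtt)
    {
        const float bad_rtt = 0.25f;  // 250ms threshold from the text
        mode_timer += dt;
        if (good)
        {
            if (rtt > bad_rtt)
            {
                // dropped from good to bad in under 10 seconds:
                // double 't', clamped at a maximum of 60 seconds
                if (mode_timer < 10.0f)
                    return_time = std::min(return_time * 2.0f, 60.0f);
                good = false;
                mode_timer = 0.0f;
            }
            else if (mode_timer >= 10.0f)
            {
                // each 10 seconds in good mode: halve 't', clamped at 1 second
                return_time = std::max(return_time * 0.5f, 1.0f);
                mode_timer = 0.0f;
            }
        }
        else
        {
            if (rtt <= bad_rtt)
            {
                good_timer += dt;
                if (good_timer >= return_time)
                {
                    good = true;       // conditions held good for 't': return to good mode
                    good_timer = 0.0f;
                    mode_timer = 0.0f;
                }
            }
            else
            {
                good_timer = 0.0f;     // conditions bad again: restart the wait
            }
        }
    }

    float send_rate() const { return good ? 30.0f : 10.0f; }  // packets per second
};
```

A connection that flaps will see its return time 't' double toward the 60 second clamp, while a connection that stays healthy earns back a short 't', which is exactly the punishment/forgiveness behavior described above.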
With this algorithm you will rapidly respond to bad conditions and drop your send rate to 10 packets per-second, avoiding flooding of the connection. You'll also conservatively try out good mode, and keep sending packets at the higher rate of 30 packets per-second while network conditions are good.
Of course, you can implement much more sophisticated algorithms. Packet loss % can be taken into account as a metric, even the amount of network jitter (time variance in packet acks), not just RTT.
You can also get much more greedy with congestion avoidance, and attempt to discover when you can send data at a much higher bandwidth (eg. LAN), but you have to be very careful! With increased greediness comes more risk that you’ll flood the connection.
Our new reliability system lets us send a steady stream of packets and notifies us which packets are received. From this we can infer lost packets, and resend data that didn't get through if necessary. We also have a simple congestion avoidance system that drops from 30 packets per-second to 10 packets per-second so we don't flood the connection.
Translation: Ai Tao (轻描一个世界) · Review: Huang Wei (横写丶意气风发)
Introduction
Hi, I'm Glenn Fiedler and welcome to the fourth article in my series on networking for game programmers.
In the previous article we added our own concept of virtual connections on top of UDP.
Now we're going to add reliability, ordering and congestion avoidance to our virtual UDP connection.
This is by far the most complicated aspect of low-level game networking, so this is going to be a pretty intense article. Strap in and let's go!
The Problem with TCP
Those of you familiar with TCP know that it already has its own concepts of connection, reliability, ordering and congestion avoidance, so why are we rewriting our own mini version of TCP on top of UDP?
The issue is that multiplayer action games rely on a steady stream of packets sent at rates of 10 to 30 packets per second, and for the most part the data contained in these packets is so time sensitive that only the most recent data is useful. This includes data such as player inputs, the position, orientation and velocity of each player character, and the state of physics objects in the game world.
The problem with TCP is that it abstracts data delivery as a reliable, ordered stream. Because of this, if a packet is lost, TCP has to stop and wait for that packet to be resent. This interrupts the steady stream of packets, because more recent packets must wait in a queue until the resent packet arrives, so that packets are delivered in order.
What we need is a different type of reliability. Instead of having all data delivered as a reliable ordered stream, we want to send packets at a steady rate and get notified when packets are received by the other computer. This allows time-sensitive data to get through without waiting for resent packets, and lets us decide for ourselves how to handle packet loss at the application level.
It is not possible to implement a reliability system with these properties using TCP, so we have no choice but to roll our own on top of UDP.
Unfortunately, reliability is not the only thing we must rewrite. TCP also provides congestion avoidance, so that it can dynamically scale the rate at which it sends data to match the properties of the connection. For example, TCP sends less data over a 28.8k modem than it would over a T1 line, and it is able to do this without knowing in advance what sort of connection it is!
Sequence Numbers
Now back to reliability!
The goal of our reliability system is simple: we want to know which packets arrive at the other side of the connection.
First we need a way to identify packets.
What if we added the concept of a "packet id"? Let's make it an integer value. We could start it at zero, then increase it by one with each packet we send. The first packet we send would be "packet 0", and the 100th packet sent would be "packet 99".
This is actually quite a common technique. It's even used in TCP! These packet ids are called sequence numbers, and while we're not going to implement reliability the way TCP does, it makes sense to use the same terminology, so we'll call them sequence numbers from now on.
Since UDP does not guarantee the order of packets, the 100th packet received is not necessarily the 100th packet sent. It follows that we need to insert the sequence number into the packet, so that the computer at the other side of the connection knows which packet it is.
We already have a simple packet header for the virtual connection from the previous article, so we'll just add the sequence number to the header like this:
[uint protocol id]
[uint sequence]
(packet data…)
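The header layout above amounts to writing a couple of integers in front of the packet data. Here is a minimal serialization sketch; big-endian byte order and the helper names are assumptions of this sketch, and the article's accompanying source has its own serialization code:

```cpp
#include <cassert>
#include <cstdint>

// Write a 32-bit value into the buffer in big-endian (network) byte order
// and advance the write cursor.
void write_uint32(unsigned char*& p, uint32_t value)
{
    p[0] = (unsigned char)(value >> 24);
    p[1] = (unsigned char)(value >> 16);
    p[2] = (unsigned char)(value >> 8);
    p[3] = (unsigned char)(value);
    p += 4;
}

// Read a 32-bit big-endian value from the buffer and advance the read cursor.
uint32_t read_uint32(const unsigned char*& p)
{
    uint32_t value = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                     ((uint32_t)p[2] << 8) | (uint32_t)p[3];
    p += 4;
    return value;
}
```

Writing the header is then just two calls: one for the protocol id and one for the sequence number, followed by the packet data.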
Now when the other computer receives a packet, it knows its sequence number according to the computer that sent it.
Acks
Now that we can identify packets using sequence numbers, the next step is to let the other side of the connection know which packets we have received.
Logically this is quite simple: we just need to take note of the sequence number of each packet we receive, and send those sequence numbers back to the computer that sent them.
Because we are sending packets back and forth between the two machines anyway, we can just add the ack to the packet header, just like we added the sequence number:
[uint protocol id]
[uint sequence]
[uint ack]
(packet data…)
Our general approach is as follows:
Each time we send a packet, we increase the local sequence number.
When we receive a packet, we check its sequence number against the sequence number of the most recently received packet, called the remote sequence number. If the packet is more recent, we update the remote sequence number to equal the sequence number of the packet.
When we compose packet headers, the local sequence number becomes the sequence number of the packet, and the remote sequence number becomes the ack.
This simple ack system works provided that one packet comes in for each packet we send out.
But what if packets clump up, such that two packets arrive before we send one? We only have space for one ack per packet, so what do we do?
Now consider the case where one side of the connection sends packets at a faster rate. If the client sends 30 packets per second and the server sends only 10, we need at least 3 acks in each packet sent from the server.
Let's make it even more complicated! What if the packet containing the ack is lost? The computer that sent the original packet would think it was lost, even though it was actually received!
It seems we need to make our reliability system... more reliable!
Reliable Acks
Here is where we diverge from TCP.
What TCP does is maintain a sliding window, where the ack sent is the sequence number of the next packet it expects to receive, in order. If TCP does not receive an ack for a given packet, it stops and resends the packet with that sequence number. This is exactly the behavior we want to avoid!
So in our reliability system, we never resend a packet with a given sequence number. We sequence n exactly once, then we send n+1, n+2 and so on. If packet n is lost, we never stop and resend it. Instead, we leave it up to the application to compose a new packet containing the lost data, if necessary, and that packet gets sent with a new sequence number.
Because we do things differently from TCP, it is now possible to have holes in the set of packets we ack, so it is no longer sufficient to just state the sequence number of the most recent packet we received.
We need to include multiple acks in each packet.
How many acks do we need?
As mentioned before in the case where one side sends packets faster than the other, let's assume the worst case is one side sending no fewer than 10 packets per second, while the other sends no more than 30. In this case, the average number of acks we need per packet is 3, but if packets clump up a bit, we'll need more. Let's say 6-10 in the worst case.
What about acks that don't get through because the packet containing them is lost?
To solve this, we'll use a classic networking strategy: redundancy to defeat packet loss!
Let's include 33 acks per packet, and not just up to 33, but always 33. So for any given ack, we redundantly send it up to 32 additional times, just in case one packet carrying it doesn't get through!
But how can we possibly fit 33 acks in a packet? At 4 bytes per ack, that would be 132 bytes!
The trick is to represent the 32 acks before the "ack" using a bitfield, like this:
[uint protocol id]
[uint sequence]
[uint ack]
[uint ack bitfield]
(packet data…)
We define the bitfield such that each bit corresponds to an ack for one of the 32 sequence numbers before the "ack" value. So let's say the "ack" is 100. If the first bit of the bitfield is set, the packet also includes an ack for packet 99. If the second bit is set, packet 98 is acked. This goes all the way down to the 32nd bit, for packet 68.
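The bitfield construction just described can be sketched as follows, assuming the received sequence numbers are tracked in a set. The names are illustrative, and wrap-around handling is omitted here for clarity:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Build the 32-bit ack bitfield: bit n (0-based) is set when the packet
// with sequence number (ack - 1 - n) has been received. `received` holds
// the sequence numbers seen so far.
uint32_t build_ack_bits(uint32_t ack, const std::set<uint32_t>& received)
{
    uint32_t bits = 0;
    for (uint32_t n = 0; n < 32; ++n)
    {
        // guard against underflow for small ack values
        if (ack >= n + 1 && received.count(ack - 1 - n))
            bits |= (1u << n);
    }
    return bits;
}
```

For example, with ack = 100 and packets 97, 99 and 100 received, bits 0 (packet 99) and 2 (packet 97) are set, while bit 1 (the missing packet 98) stays clear.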
Our adjusted algorithm looks like this:
Each time we send a packet, we increase the local sequence number.
When we receive a packet, we check its sequence number against the remote sequence number. If the packet is more recent, we update the remote sequence number.
When we compose packet headers, the local sequence number becomes the sequence number of the packet, and the remote sequence number becomes the ack. The ack bitfield is calculated from a queue of received packets with sequence numbers in the range [remote sequence - 32, remote sequence]: we set bit n in the bitfield if the sequence number remote sequence - n is in that queue.
With this improved algorithm, you would have to lose 100% of packets for over a second before an ack failed to get through. And of course, it easily handles different send rates and clumped-up packet receives.
Detecting Lost Packets
Now that we know which packets are received by the other side of the connection, how do we detect packet loss?
The trick here is to flip it around and think in reverse: if we don't receive an ack for a packet within a certain amount of time, we consider that packet lost.
Given that we are sending no more than 30 packets per second, and that we redundantly send each ack roughly 30 times, if you don't receive an ack for a packet within one second, it is very likely that the packet was lost.
So we are playing a bit of a trick here: while we can be 100% sure which packets got through, we can only be reasonably certain about the ones that didn't arrive.
The implication is that any data resent using this reliability technique needs its own message id, so that if you receive it multiple times, you can discard the duplicates. This can be done at the application level.
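The one-second timeout just described can be sketched like this. The PacketEntry structure, container choice and names are assumptions of this sketch, not the article's exact source:

```cpp
#include <cassert>
#include <vector>

// One entry per sent packet, kept until it is acked or times out.
struct PacketEntry
{
    unsigned int sequence;
    float time_sent;  // seconds
    bool acked;
};

// Returns the sequence numbers considered lost (no ack within one second)
// and removes finished entries from the sent queue.
std::vector<unsigned int> detect_lost(std::vector<PacketEntry>& sent, float now)
{
    std::vector<unsigned int> lost;
    for (auto it = sent.begin(); it != sent.end();)
    {
        if (it->acked)
            it = sent.erase(it);              // delivered: drop the entry
        else if (now - it->time_sent > 1.0f)
        {
            lost.push_back(it->sequence);     // no ack within 1s: assume lost
            it = sent.erase(it);
        }
        else
            ++it;
    }
    return lost;
}
```

Calling this once per frame keeps the sent queue bounded and hands the application a list of sequence numbers whose data it may want to repackage and resend under new sequence numbers.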
Handling Sequence Number Wrap-Around
No discussion of sequence numbers and acks would be complete without covering sequence number wrap-around!
Sequence numbers and acks are 32-bit unsigned integers, so they can represent numbers in the range [0, 4294967295]. That's a very high number! So high that if you sent 30 packets per second, it would take over four and a half years for the sequence number to wrap back around to zero.
But perhaps you want to save some bandwidth, so you shorten your sequence numbers and acks to 16-bit integers. You save 4 bytes per packet, but now they wrap around in just half an hour!
So how do we handle this wrap-around case?
The trick is to realize that if the current sequence number is already very high, and the next sequence number that comes in is very low, then you must have wrapped around. So even though the new sequence number is numerically lower than the current one, it actually represents a more recent packet.
For example, let's say we encoded sequence numbers in a single byte (not recommended, by the way :)). Then they would wrap around after 255, like this:
… 252, 253, 254, 255, 0, 1, 2, 3,…
To handle this case we need a new function that is aware of the fact that sequence numbers wrap around to zero after 255, so that 0, 1, 2, 3 are considered more recent than 255. Otherwise, our reliability system stops working after we receive packet 255.
Here is that function:
bool sequence_more_recent( unsigned int s1,
                           unsigned int s2,
                           unsigned int max )
{
    return ( ( s1 > s2 ) && ( s1 - s2 <= max/2 ) ) ||
           ( ( s2 > s1 ) && ( s2 - s1 > max/2 ) );
}
This function works by comparing the two numbers and their difference. If the difference between them is less than 1/2 the maximum sequence number value, then they must be close together, so we just check whether one is greater than the other, as usual. However, if they are far apart, with a difference greater than 1/2 the maximum sequence number, then we paradoxically consider the sequence number more recent if it is less than the current sequence number.
This last bit is what transparently handles the wrap-around, so that 0, 1, 2 are considered more recent than 255.
Neat and elegant!
Make sure you include this in any sequence number processing you do!
Congestion Avoidance
While we have solved reliability, there is still the question of congestion avoidance. TCP gives you congestion avoidance as part of the deal when you use its reliability, but UDP has no congestion avoidance whatsoever!
If we just send packets without some sort of flow control, we risk flooding the connection and inducing severe latency (2 seconds plus!) as the routers between us and the other computer become congested and buffer up packets. This happens because routers try very hard to deliver all the packets we send, so they tend to buffer packets in a queue before they consider dropping them.
While it would be nice if we could tell the routers that our packets are time sensitive and should be dropped rather than buffered when a router is overloaded, we can't really do this without rewriting the software for every router in the world!
Instead, we need to focus on what we can actually do, which is to avoid flooding the connection in the first place.
The way to do this is to implement our own basic congestion avoidance algorithm. And I stress basic! Just like reliability, we have no hope of coming up with something as general and robust as TCP's implementation on the first try, so let's keep it as simple as possible.
Measuring Round Trip Time
Since the whole point of congestion avoidance is to avoid flooding the connection and increasing round trip time (RTT), it makes sense that the most important metric for whether we are flooding the connection is the RTT itself.
We need a way to measure the RTT of our connection.
Here is the basic technique:
For each packet we send, we add an entry to a queue containing the packet's sequence number and the time it was sent.
Each time we receive an ack, we look up this entry and note the difference in local time between when we received the ack and when we sent the packet. This is the RTT for that packet.
Because the arrival of packets varies with network jitter, we need to smooth this value to make it meaningful, so each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT. 10% seems to work well in practice. This is called an exponentially smoothed moving average, and it has the effect of smoothing out noise in the RTT with a low-pass filter.
To ensure the sent queue doesn't grow forever, we discard packets once they have exceeded some maximum expected RTT. As discussed in the previous section on reliability, any packet not acked within a second is exceedingly likely to have been lost, so one second is a good value for this maximum RTT.
Now that we have RTT, we can use it as a metric to drive our congestion avoidance. If RTT gets too large, we send data less frequently; if it is within an acceptable range, we can try sending data more frequently.
Simple Binary Congestion Avoidance
As discussed before, let's not get greedy; we'll implement a very basic congestion avoidance scheme. It has two modes, good and bad. I call it simple binary congestion avoidance.
Let's assume you send packets of a certain size, say 256 bytes. You would like to send these packets 30 times a second, but if conditions are bad, you can drop down to 10 times a second.
256 byte packets 30 times a second is around 64kbits/sec, and 10 times a second is roughly 20kbits/sec. There isn't a broadband connection in the world that can't handle at least 20kbits/sec, so we'll move forward with this assumption. Unlike TCP, which is entirely general for any device with any amount of send/recv bandwidth, we're going to assume a minimum supported bandwidth for the devices involved in our connections.
So the basic idea is this: when network conditions are "good" we send 30 packets per second, and when conditions are "bad" we drop to 10 packets per second.
Of course, you can define "good" and "bad" however you like, but I've gotten good results considering only RTT. For example, if RTT exceeds some threshold (say 250ms), then you know you are probably flooding the connection. Of course, this assumes that nobody would normally exceed 250ms under non-flooding conditions, which is reasonable given our broadband requirement.
How do you switch between good and bad? The algorithm I like to use operates as follows:
If you are currently in good mode, and conditions become bad, immediately drop to bad mode.
If you are in bad mode, and conditions have been good for a specific length of time 't', then return to good mode.
To avoid rapid toggling between good and bad mode, if you drop from good mode to bad in under 10 seconds, double the amount of time 't' before bad mode goes back to good. Clamp this at some maximum, say 60 seconds.
To avoid punishing good connections for short periods of bad behavior, for each 10 seconds the connection spends in good mode, halve the time 't' before bad mode goes back to good. Clamp this at some minimum, like 1 second.
With this algorithm you will rapidly respond to bad conditions and drop your send rate to 10 packets per second, avoiding flooding of the connection. You'll also conservatively try out good mode, and keep sending packets at the higher rate of 30 packets per second while network conditions remain good.
Of course, you can implement much more sophisticated algorithms. Packet loss percentage, and even the amount of network jitter (time variance in packet acks), can be taken into account as metrics, not just RTT.
You can also get much greedier with congestion avoidance, and attempt to discover when you can send data at a much higher bandwidth (e.g. on a LAN), but you have to be very careful! With increased greediness comes more risk of flooding the connection!
Conclusion
Our new reliability system lets us send a steady stream of packets and notifies us which packets were received. From this we can infer lost packets, and resend data that didn't get through if necessary.
On top of this we have a simple congestion avoidance system that alternates between sending 10 and 30 packets per second depending on network conditions, so we don't flood the connection.
There are lots of implementation details too specific to mention in this article, so be sure to check out the example source code to see how it is all implemented.
That's it for reliability, ordering and congestion avoidance, probably the most complicated aspect of low-level game networking.
[Copyright Notice]
The original author made no rights claim, so the work is treated as shared knowledge that has entered the public domain, with authorization granted automatically.
Because the original download link for the Gaffer On Games sample source code is no longer valid, it is provided here as a supplement.