
Game Networking Part 4: Reliability, Ordering and Congestion Avoidance over UDP

Posted on 11-17-2016 | In GS

Original

Original source

Introduction

Hi, I’m Glenn Fiedler and welcome to Networking for Game Programmers.


In the previous article, we added our own concept of virtual connection on top of UDP. In this article we’re going to add reliability, ordering and congestion avoidance to our virtual UDP connection.


The Problem with TCP


Those of you familiar with TCP know that it already has its own concept of connection, reliability-ordering and congestion avoidance, so why are we rewriting our own mini version of TCP on top of UDP?


The issue is that multiplayer action games rely on a steady stream of packets sent at rates of 10 to 30 packets per second, and for the most part, the data contained in these packets is so time sensitive that only the most recent data is useful. This includes data such as player inputs, the position, orientation and velocity of each player character, and the state of physics objects in the world.


The problem with TCP is that it abstracts data delivery as a reliable ordered stream. Because of this, if a packet is lost, TCP has to stop and wait for that packet to be resent. This interrupts the steady stream of packets because more recent packets must wait in a queue until the resent packet arrives, so packets are received in the same order they were sent.


What we need is a different type of reliability. Instead of having all data treated as a reliable ordered stream, we want to send packets at a steady rate and get notified when packets are received by the other computer. This allows time sensitive data to get through without waiting for resent packets, while letting us make our own decision about how to handle packet loss at the application level.


It is not possible to implement a reliability system with these properties using TCP, so we have no choice but to roll our own reliability on top of UDP.


Sequence Numbers


The goal of our reliability system is simple: we want to know which packets arrive at the other side of the connection.


First we need a way to identify packets.


What if we added the concept of a “packet id”? Let’s make it an integer value. We could start it at zero, then with each packet we send, increase the number by one. The first packet we send would be packet 0, and the 100th packet sent would be packet 99.


This is actually quite a common technique. It’s even used in TCP! These packet ids are called sequence numbers. While we’re not going to implement reliability exactly as TCP does, it makes sense to use the same terminology, so we’ll call them sequence numbers from now on.


Since UDP does not guarantee the order of packets, the 100th packet received is not necessarily the 100th packet sent. It follows that we need to insert the sequence number somewhere in the packet, so that the computer at the other side of the connection knows which packet it is.


We already have a simple packet header for the virtual connection from the previous article, so we’ll just add the sequence number in the header like this:


    [uint protocol id]
    [uint sequence]
    (packet data…)

Now when the other computer receives a packet it knows its sequence number according to the computer that sent it.
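As an illustrative sketch (not the article’s actual source; the function names and the choice of big-endian byte order are assumptions), the protocol id and sequence number could be serialized at the front of each packet like this:

```cpp
#include <cstdint>

// Write a 32 bit value into the packet buffer in big-endian byte order.
void write_uint32( unsigned char * buffer, uint32_t value )
{
    buffer[0] = (unsigned char) ( value >> 24 );
    buffer[1] = (unsigned char) ( value >> 16 );
    buffer[2] = (unsigned char) ( value >> 8 );
    buffer[3] = (unsigned char) ( value );
}

// Read a 32 bit value back out of the packet buffer.
uint32_t read_uint32( const unsigned char * buffer )
{
    return ( (uint32_t) buffer[0] << 24 ) | ( (uint32_t) buffer[1] << 16 ) |
           ( (uint32_t) buffer[2] << 8 )  |   (uint32_t) buffer[3];
}
```

The sender writes the protocol id at offset 0 and the sequence number at offset 4; the receiver reads them back the same way.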


Acks


Now that we can identify packets using sequence numbers, the next step is to let the other side of the connection know which packets we receive.


Logically this is quite simple, we just need to take note of the sequence number of each packet we receive, and send those sequence numbers back to the computer that sent them.


Because we are sending packets continuously between both machines, we can just add the ack to the packet header, just like we did with the sequence number:


    [uint protocol id]
    [uint sequence]
    [uint ack]
    (packet data…)

Our general approach is as follows:



  • Each time we send a packet we increase the local sequence number


  • When we receive a packet, we check the sequence number of the packet against the sequence number of the most recently received packet, called the remote sequence number. If the packet is more recent, we update the remote sequence to be equal to the sequence number of the packet.


  • When we compose packet headers, the local sequence becomes the sequence number of the packet, and the remote sequence becomes the ack.



This simple ack system works provided that one packet comes in for each packet we send out.
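The three steps above might look something like this in code. This is a minimal sketch: the struct and function names are made up for illustration, and sequence number wrap-around is ignored for now (it is handled later in the article):

```cpp
#include <cstdint>

// Hypothetical per-connection state for the simple ack scheme.
struct Connection
{
    uint32_t local_sequence = 0;    // sequence number of the next packet we send
    uint32_t remote_sequence = 0;   // most recent packet sequence received
};

// On send: stamp the packet with the local sequence, ack the remote sequence.
void on_packet_send( Connection & c, uint32_t & out_sequence, uint32_t & out_ack )
{
    out_sequence = c.local_sequence++;
    out_ack = c.remote_sequence;
}

// On receive: update the remote sequence if this packet is more recent.
void on_packet_receive( Connection & c, uint32_t sequence )
{
    if ( sequence > c.remote_sequence )    // naive compare; no wrap handling yet
        c.remote_sequence = sequence;
}
```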


But what if packets clump up such that two packets arrive before we send a packet? We only have space for one ack per-packet, so what do we do?


Now consider the case where one side of the connection is sending packets at a faster rate. If the client sends 30 packets per-second, and the server only sends 10 packets per-second, we need at least 3 acks included in each packet sent from the server.


Let’s make it even more complex! What if the packet containing the ack is lost? The computer that sent the packet would think the packet got lost but it was actually received!


It seems like we need to make our reliability system… more reliable!


Reliable Acks


Here is where we diverge significantly from TCP.


What TCP does is maintain a sliding window where the ack sent is the sequence number of the next packet it expects to receive, in order. If TCP does not receive an ack for a given packet, it stops and resends a packet with that sequence number again. This is exactly the behavior we want to avoid!


In our reliability system, we never resend a packet with a given sequence number. We send packet n exactly once, then we send n+1, n+2 and so on. We never stop and resend packet n if it was lost; we leave it up to the application to compose a new packet containing the data that was lost, if necessary, and this packet gets sent with a new sequence number.


Because we’re doing things differently from TCP, it’s now possible to have holes in the set of packets we ack, so it is no longer sufficient to just state the sequence number of the most recent packet we have received.


We need to include multiple acks per-packet.


How many acks do we need?


As mentioned previously we have the case where one side of the connection sends packets faster than the other. Let’s assume that the worst case is one side sending no less than 10 packets per-second, while the other sends no more than 30. In this case, the average number of acks we’ll need per-packet is 3, but if packets clump up a bit, we would need more. Let’s say 6-10 worst case.


What about acks that don’t get through because the packet containing the ack is lost?


To solve this, we’re going to use a classic networking strategy of using redundancy to defeat packet loss!


Let’s include 33 acks per-packet, and this isn’t just going to be up to 33, but always 33. So for any given ack we redundantly send it up to 32 additional times, just in case one packet with the ack doesn’t get through!


But how can we possibly fit 33 acks in a packet? At 4 bytes per-ack that’s 132 bytes!


The trick is to represent the 32 previous acks before “ack” using a bitfield:


    [uint protocol id]
    [uint sequence]
    [uint ack]
    [uint ack bitfield]
    (packet data…)

We define “ack bitfield” such that each bit corresponds to acks of the 32 sequence numbers before “ack”. So let’s say “ack” is 100. If the first bit of “ack bitfield” is set, then the packet also includes an ack for packet 99. If the second bit is set, then packet 98 is acked. This goes all the way down to the 32nd bit for packet 68.
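Building the ack bitfield could be sketched like this (the function name and use of `std::set` as the received queue are illustrative, and wrap-around is ignored here for simplicity):

```cpp
#include <cstdint>
#include <set>

// Build the 32 bit ack bitfield from the set of received sequence numbers.
// Bit n-1 is set if sequence number (ack - n) was received, for n in [1,32].
uint32_t make_ack_bitfield( uint32_t ack, const std::set<uint32_t> & received )
{
    uint32_t bitfield = 0;
    for ( uint32_t n = 1; n <= 32; ++n )
    {
        if ( ack < n )
            break;                              // no sequence numbers below zero
        if ( received.count( ack - n ) )
            bitfield |= ( 1u << ( n - 1 ) );    // packet (ack - n) was received
    }
    return bitfield;
}
```

For example, with ack = 100 and packets 99, 98 and 68 received, bits 1, 2 and 32 are set.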


Our adjusted algorithm looks like this:



  • Each time we send a packet we increase the local sequence number


  • When we receive a packet, we check the sequence number of the packet against the remote sequence number. If the packet sequence is more recent, we update the remote sequence number.


  • When we compose packet headers, the local sequence becomes the sequence number of the packet, and the remote sequence becomes the ack. The ack bitfield is calculated by looking into a queue of up to 33 packets, containing sequence numbers in the range [remote sequence - 32, remote sequence]. We set bit n (in [1,32]) in ack bits to 1 if the sequence number remote sequence - n is in the received queue.


  • Additionally, when a packet is received, ack bitfield is scanned and if bit n is set, then we acknowledge sequence number packet sequence - n, if it has not been acked already.



With this improved algorithm, you would have to lose 100% of packets for more than a second to stop an ack getting through. And of course, it easily handles different send rates and clumped up packet receives.
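On the receiving side, the ack and ack bitfield can be expanded back into individual acks. A minimal sketch (illustrative names, wrap-around ignored; filtering out acks we have already processed is left to the caller):

```cpp
#include <cstdint>
#include <vector>

// Expand the ack and ack bitfield from an incoming packet header into the
// full list of acknowledged sequence numbers.
std::vector<uint32_t> expand_acks( uint32_t ack, uint32_t ack_bitfield )
{
    std::vector<uint32_t> acks;
    acks.push_back( ack );                      // the most recent ack
    for ( uint32_t n = 1; n <= 32; ++n )        // then up to 32 redundant acks
    {
        if ( ack >= n && ( ack_bitfield & ( 1u << ( n - 1 ) ) ) )
            acks.push_back( ack - n );
    }
    return acks;
}
```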


Detecting Lost Packets


Now that we know what packets are received by the other side of the connection, how do we detect packet loss?


The trick here is to flip it around and say that if you don’t get an ack for a packet within a certain amount of time, then we consider that packet lost.


Given that we are sending at no more than 30 packets per second, and we are redundantly sending acks roughly 30 times, if you don’t get an ack for a packet within one second, it is very likely that packet was lost.
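A sketch of this timeout-based loss detection, assuming the caller supplies a local clock in seconds (the struct and function names are made up for illustration):

```cpp
#include <cstdint>
#include <list>

// An entry in the sent-packet queue, waiting for an ack.
struct SentPacket
{
    uint32_t sequence;
    double   send_time;      // seconds, on the local clock
    bool     acked = false;
};

// Remove pending packets that have waited longer than the timeout, and
// return how many were declared lost.
int detect_lost_packets( std::list<SentPacket> & sent, double now,
                         double timeout = 1.0 )
{
    int lost = 0;
    for ( auto it = sent.begin(); it != sent.end(); )
    {
        if ( !it->acked && now - it->send_time > timeout )
        {
            ++lost;                  // report the loss to the application here
            it = sent.erase( it );
        }
        else
            ++it;
    }
    return lost;
}
```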


So we are playing a bit of a trick here: we can know 100% for sure which packets got through, but we can only be reasonably certain of the set of packets that didn’t arrive.


The implication of this is that any data which you resend using this reliability technique needs to have its own message id so that if you receive it multiple times, you can discard it. This can be done at the application level.


Handling Sequence Number Wrap-Around


No discussion of sequence numbers and acks would be complete without coverage of sequence number wrap around!


Sequence numbers and acks are 32 bit unsigned integers, so they can represent numbers in the range [0,4294967295]. That’s a very high number! So high that if you sent 30 packets per-second, it would take over four and a half years for the sequence number to wrap back around to zero.


But perhaps you want to save some bandwidth so you shorten your sequence numbers and acks to 16 bit integers. You save 4 bytes per-packet, but now they wrap around in only half an hour.


So how do we handle this wrap around case?


The trick is to realize that if the current sequence number is already very high, and the next sequence number that comes in is very low, then you must have wrapped around. So even though the new sequence number is numerically lower than the current sequence value, it actually represents a more recent packet.


For example, let’s say we encoded sequence numbers in one byte (not recommended btw. :)), then they would wrap around after 255 like this:


    … 252, 253, 254, 255, 0, 1, 2, 3, …

To handle this case we need a new function that is aware of the fact that sequence numbers wrap around to zero after 255, so that 0, 1, 2, 3 are considered more recent than 255. Otherwise, our reliability system stops working after you receive packet 255.


Here’s a function for 16 bit sequence numbers:


    inline bool sequence_greater_than( uint16_t s1, uint16_t s2 )
    {
        return ( ( s1 > s2 ) && ( s1 - s2 <= 32768 ) ) ||
               ( ( s1 < s2 ) && ( s2 - s1 > 32768 ) );
    }

This function works by comparing the two numbers and their difference. If their difference is less than 1⁄2 the maximum sequence number value, then they must be close together, so we just check if one is greater than the other, as usual. However, if they are far apart, their difference will be greater than 1⁄2 the maximum sequence number, and we paradoxically consider the sequence number more recent if it is less than the current sequence number.


This last bit is what handles the wrap around of sequence numbers transparently, so 0,1,2 are considered more recent than 255.
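For example, repeating the function above so the snippet is self-contained, the wrap-around cases behave like this:

```cpp
#include <cstdint>

// The wrap-aware comparison from above, repeated so this example is
// self-contained.
inline bool sequence_greater_than( uint16_t s1, uint16_t s2 )
{
    return ( ( s1 > s2 ) && ( s1 - s2 <= 32768 ) ) ||
           ( ( s1 < s2 ) && ( s2 - s1 > 32768 ) );
}

// sequence_greater_than( 100, 99 )  -> true  (the usual case)
// sequence_greater_than( 2, 65535 ) -> true  (2 is more recent: we wrapped)
// sequence_greater_than( 65535, 2 ) -> false
```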


Make sure you include this in any sequence number processing you do.


Congestion Avoidance


While we have solved reliability, there is still the question of congestion avoidance. TCP provides congestion avoidance as part of the package when you get TCP reliability, but UDP has no congestion avoidance whatsoever!


If we just send packets without some sort of flow control, we risk flooding the connection and inducing severe latency (2 seconds plus!) as routers between us and the other computer become congested and buffer up packets. This happens because routers try very hard to deliver all the packets we send, and therefore tend to buffer up packets in a queue before they consider dropping them.


While it would be nice if we could tell the routers that our packets are time sensitive and should be dropped instead of buffered if the router is overloaded, we can’t really do this without rewriting the software for all routers in the world.


Instead, we need to focus on what we can actually do: avoid flooding the connection in the first place. We try to avoid sending too much bandwidth, and then if we detect congestion, we attempt to back off and send even less.


The way to do this is to implement our own basic congestion avoidance algorithm. And I stress basic! Just like reliability, we have no hope of coming up with something as general and robust as TCP’s implementation on the first try, so let’s keep it as simple as possible.


Measuring Round Trip Time


Since the whole point of congestion avoidance is to avoid flooding the connection and increasing round trip time (RTT), it makes sense that the most important metric as to whether or not we are flooding our connection is the RTT itself.


We need a way to measure the RTT of our connection.


Here is the basic technique:



  • For each packet we send, we add an entry to a queue containing the sequence number of the packet and the time it was sent.


  • Each time we receive an ack, we look up this entry and note the difference in local time between the time we receive the ack, and the time we sent the packet. This is the RTT time for that packet.


  • Because the arrival of packets varies with network jitter, we need to smooth this value to provide something meaningful, so each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT. 10% seems to work well for me in practice. This is called an exponentially smoothed moving average, and it has the effect of smoothing out noise in the RTT with a low pass filter.


  • To ensure that the sent queue doesn’t grow forever, we discard packets once they have exceeded some maximum expected RTT. As discussed in the previous section on reliability, it is exceptionally likely that any packet not acked within a second was lost, so one second is a good value for this maximum RTT.
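The smoothing step described above can be sketched in a few lines (the struct name is made up, and the 10% factor is the value that the article says works well in practice):

```cpp
// Exponentially smoothed moving average of the round trip time: each new
// sample moves the smoothed RTT 10% of the way toward the packet's RTT,
// acting as a low pass filter over network jitter.
struct RttEstimator
{
    double rtt = 0.0;    // smoothed round trip time, in seconds

    void add_sample( double packet_rtt, double smoothing = 0.1 )
    {
        rtt += ( packet_rtt - rtt ) * smoothing;
    }
};
```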



Now that we have RTT, we can use it as a metric to drive our congestion avoidance. If RTT gets too large, we send data less frequently; if it’s within acceptable ranges, we can try sending data more frequently.


Simple Binary Congestion Avoidance


As discussed before, let’s not get greedy, we’ll implement a very basic congestion avoidance. This congestion avoidance has two modes. Good and bad. I call it simple binary congestion avoidance.


Let’s assume you send packets of a certain size, say 256 bytes. You would like to send these packets 30 times a second, but if conditions are bad, you can drop down to 10 times a second.


So 256 byte packets 30 times a second is around 64kbits/sec, and 10 times a second is roughly 20kbit/sec. There isn’t a broadband network connection in the world that can’t handle at least 20kbit/sec, so we’ll move forward with this assumption. Unlike TCP which is entirely general for any device with any amount of send/recv bandwidth, we’re going to assume a minimum supported bandwidth for devices involved in our connections.


So the basic idea is this. When network conditions are “good” we send 30 packets per-second, and when network conditions are “bad” we drop to 10 packets per-second.


Of course, you can define “good” and “bad” however you like, but I’ve gotten good results considering only RTT. For example if RTT exceeds some threshold (say 250ms) then you know you are probably flooding the connection. Of course, this assumes that nobody would normally exceed 250ms under non-flooding conditions, which is reasonable given our broadband requirement.


How do you switch between good and bad? The algorithm I like to use operates as follows:



  • If you are currently in good mode, and conditions become bad, immediately drop to bad mode


  • If you are in bad mode, and conditions have been good for a specific length of time ’t’, then return to good mode


  • To avoid rapid toggling between good and bad mode, if you drop from good mode to bad in under 10 seconds, double the amount of time ’t’ before bad mode goes back to good. Clamp this at some maximum, say 60 seconds.


  • To avoid punishing good connections when they have short periods of bad behavior, for each 10 seconds the connection is in good mode, halve the time ’t’ before bad mode goes back to good. Clamp this at some minimum like 1 second.



With this algorithm you will rapidly respond to bad conditions and drop your send rate to 10 packets per-second, avoiding flooding of the connection. You’ll also conservatively try out good mode, and persist in sending packets at a higher rate of 30 packets per-second, while network conditions are good.
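One possible sketch of this state machine follows. The names and bookkeeping details are assumptions, not the article’s source; in particular, the 10 second good-mode timer here resets whenever the penalty time is halved, which is a simplification:

```cpp
#include <algorithm>

// Simple binary congestion avoidance: two modes, good (30 packets/sec) and
// bad (10 packets/sec), with a penalty time 't' that doubles on rapid drops
// and halves after sustained good behavior.
struct FlowControl
{
    bool   good_mode = false;
    double penalty_time = 4.0;           // 't': how long conditions must stay
                                         // good before leaving bad mode
    double good_conditions_time = 0.0;   // how long conditions have been good
    double time_in_good_mode = 0.0;

    double send_rate() const { return good_mode ? 30.0 : 10.0; }

    // Call every frame with elapsed time and whether conditions are good
    // (e.g. RTT below a 250ms threshold).
    void update( double dt, bool conditions_good )
    {
        if ( good_mode )
        {
            if ( !conditions_good )
            {
                good_mode = false;       // drop to bad mode immediately
                if ( time_in_good_mode < 10.0 )   // rapid toggle: punish
                    penalty_time = std::min( penalty_time * 2.0, 60.0 );
                time_in_good_mode = 0.0;
                good_conditions_time = 0.0;
                return;
            }
            time_in_good_mode += dt;
            if ( time_in_good_mode >= 10.0 )      // sustained good: reward
            {
                penalty_time = std::max( penalty_time * 0.5, 1.0 );
                time_in_good_mode = 0.0;
            }
        }
        else
        {
            good_conditions_time =
                conditions_good ? good_conditions_time + dt : 0.0;
            if ( good_conditions_time >= penalty_time )
            {
                good_mode = true;        // conditions good for 't': recover
                time_in_good_mode = 0.0;
            }
        }
    }
};
```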


Of course, you can implement much more sophisticated algorithms. Packet loss % can be taken into account as a metric, even the amount of network jitter (time variance in packet acks), not just RTT.


You can also get much more greedy with congestion avoidance, and attempt to discover when you can send data at a much higher bandwidth (eg. LAN), but you have to be very careful! With increased greediness comes more risk that you’ll flood the connection.


Conclusion


Our new reliability system lets us send a steady stream of packets and notifies us which packets are received. From this we can infer lost packets, and resend data that didn’t get through if necessary. We also have a simple congestion avoidance system that drops from 30 packets per-second to 10 packets per-second so we don’t flood the connection.

Translation

Translation source

Translated by Ai Tao (轻描一个世界); reviewed by Huang Wei (横写丶意气风发)




[Copyright Notice]

The original author made no rights declaration, so the work is treated as shared knowledge entering the public domain, with authorization granted automatically.


Source Code Download

Because the original download link for the Gaffer On Games source code is no longer valid, it is mirrored here.

Please click here

Game Networking Part 3: Virtual Connections over UDP

Posted on 11-16-2016 | In GS

Original

Original source

Introduction

Hi, I’m Glenn Fiedler and welcome to Networking for Game Programmers.


In the previous article we sent and received packets over UDP. Since UDP is connectionless, one UDP socket can be used to exchange packets with any number of different computers. In multiplayer games however, we usually only want to exchange packets between a small set of connected computers.


As the first step towards a general connection system, we’ll start with the simplest case possible: creating a virtual connection between two computers on top of UDP.


But first, we’re going to dig in a bit deeper about how the Internet really works!


The Internet is NOT a series of tubes


In 2006, Senator Ted Stevens made internet history with his famous speech on the net neutrality act:


“The internet is not something that you just dump something on. It’s not a big truck. It’s a series of tubes”

When I first started using the Internet, I was just like Ted. Sitting in the computer lab in University of Sydney in 1995, I was “surfing the web” with this new thing called Netscape Navigator, and I had absolutely no idea what was going on.


You see, I thought each time you connected to a website there was some actual connection going on, like a telephone line. I wondered, how much does it cost each time I connect to a new website? 30 cents? A dollar? Was somebody from the university going to tap me on the shoulder and ask me to pay the long distance charges? :)


Of course, this all seems silly now.


There is no switchboard somewhere that directly connects you via a physical phone line to the other computer you want to talk to, let alone a series of pneumatic tubes like Sen. Stevens would have you believe.


No Direct Connections


Instead your data is sent over Internet Protocol (IP) via packets that hop from computer to computer.


A packet may pass through several computers before it reaches its destination. You cannot know the exact set of computers in advance, as it changes dynamically depending on how the network decides to route packets. You could even send two packets A and B to the same address, and they may take different routes.


On unix-like systems you can inspect the route that packets take by calling “traceroute” and passing in a destination hostname or IP address.


On Windows, replace “traceroute” with “tracert” to get it to work.


Try it with a few websites like this:


    traceroute slashdot.org
    traceroute amazon.com
    traceroute google.com
    traceroute bbc.co.uk
    traceroute news.com.au

Take a look and you should be able to convince yourself pretty quickly that there is no direct connection.


How Packets Get Delivered


In the first article, I presented a simple analogy for packet delivery, describing it as somewhat like a note being passed from person to person across a crowded room.


While this analogy gets the basic idea across, it is much too simple. The Internet is not a flat network of computers, it is a network of networks. And of course, we don’t just need to pass letters around a small room, we need to be able to send them anywhere in the world.


It should be pretty clear then that the best analogy is the postal service!


When you want to send a letter to somebody you put your letter in the mailbox and you trust that it will be delivered correctly. It’s not really relevant to you how it gets there, as long as it does. Somebody has to physically deliver your letter to its destination of course, so how is this done?


Well first off, the postman sure as hell doesn’t take your letter and deliver it personally! It seems that the postal service is not a series of tubes either. Instead, the postman takes your letter to the local post office for processing.


If the letter is addressed locally then the post office just sends it back out, and another postman delivers it directly. But, if the address is non-local then it gets interesting! The local post office is not able to deliver the letter directly, so it passes it “up” to the next level of hierarchy, perhaps to a regional post office which services cities nearby, or maybe to a mail center at an airport, if the address is far away. Ideally, the actual transport of the letter would be done using a big truck.


Let’s get complicated and assume the letter is sent from Los Angeles to Sydney, Australia. The local post office receives the letter and given that it is addressed internationally, sends it directly to a mail center at LAX. The letter is processed again according to address, and gets routed on the next flight to Sydney.


The plane lands at Sydney airport where an entirely different postal system takes over. Now the whole process starts operating in reverse. The letter travels “down” the hierarchy, from the general, to the specific. From the mail hub at Sydney Airport it gets sent out to a regional center, the regional center delivers it to the local post office, and eventually the letter is hand delivered by a mailman with a funny accent. Crikey! :)


Just like post offices determine how to deliver letters via their address, networks deliver packets according to their IP address. The low-level details of this delivery and the actual routing of packets from network to network is actually quite complex, but the basic idea is that each router is just another computer, with a routing table describing where packets matching sets of addresses should go, as well as a default gateway address describing where to pass packets for which there is no matching entry in the table. It is routing tables, and the physical connections they represent that define the network of networks that is the Internet.


The job of configuring these routing tables is up to network administrators, not programmers like us. But if you want to read more about it, then this article from ars technica provides some fascinating insight into how networks exchange packets between each other via peering and transit relationships. You can also read more details about routing tables in this linux faq, and about the border gateway protocol on wikipedia, which automatically discovers how to route packets between networks, making the internet a truly distributed system capable of dynamically routing around broken connectivity.


Virtual Connections


Now back to connections.


If you have used TCP sockets then you know that they sure look like a connection, but since TCP is implemented on top of IP, and IP is just packets hopping from computer to computer, it follows that TCP’s concept of connection must be a virtual connection.


If TCP can create a virtual connection over IP, it follows that we can do the same over UDP.


Let’s define our virtual connection as two computers exchanging UDP packets at some fixed rate, like 10 packets per second. As long as the packets are flowing, we consider the two computers to be virtually connected.


Our connection has two sides:



  • One computer sits there and listens for another computer to connect to it. We’ll call this computer the server.

  • Another computer connects to a server by specifying an IP address and port. We’ll call this computer the client.


In our case, we only allow one client to connect to the server at any time. We’ll generalize our connection system to support multiple simultaneous connections in a later article. Also, we assume that the server is on a fixed IP address that the client may directly connect to.


Protocol ID


Since UDP is connectionless our UDP socket can receive packets sent from any computer.


We’d like to narrow this down so that the server only receives packets sent from the client, and the client only receives packets sent from the server. We can’t just filter out packets by address, because the server doesn’t know the address of the client in advance. So instead, we prefix each UDP packet with a small header containing a 32 bit protocol id as follows:


    [uint protocol id]
(packet data…)

The protocol id is just some unique number representing our game protocol. Whenever a packet arrives on our UDP socket, we inspect its first four bytes. If they don’t match our protocol id, the packet is ignored. If the protocol id does match, we strip the first four bytes from the packet and deliver the rest as payload.


You just choose some number that is reasonably unique, perhaps a hash of the name of your game and the protocol version number. But really you can use anything. The whole point is that from the point of view of our connection based protocol, packets with different protocol ids are ignored.


Detecting Connection


Now we need a way to detect connection.


Sure we could do some complex handshaking involving multiple UDP packets sent back and forth. Perhaps a client “request connection” packet is sent to the server, to which the server responds with a “connection accepted” sent back to the client, or maybe an “i’m busy” packet if a client tries to connect to server which already has a connected client.


Or… we could just set up our server to take the first packet it receives with the correct protocol id, and consider a connection to be established.


The client just starts sending packets to the server assuming connection. When the server receives the first packet from the client, it takes note of the IP address and port of the client, and starts sending packets back.


The client already knows the address and port of the server, since it was specified on connect. So when the client receives packets, it filters out any that don’t come from the server address. Similarly, once the server receives the first packet from the client, it gets the address and port of the client from “recvfrom”, so it is able to ignore any packets that don’t come from the client address.


We can get away with this shortcut because we only have two computers involved in the connection. In later articles, we’ll extend our connection system to support more than two computers in a client/server or peer-to-peer topology, and at this point we’ll upgrade our connection negotiation to something more robust.


But for now, why make things more complicated than they need to be?


Detecting Disconnection


How do we detect disconnection?


Well if a connection is defined as receiving packets, we can define disconnection as not receiving packets.


To detect when we are not receiving packets, we keep track of the number of seconds since we last received a packet from the other side of the connection. We do this on both sides.


Each time we receive a packet from the other side, we reset our accumulator to 0.0; each update, we increase the accumulator by the amount of time that has passed.


If this accumulator exceeds some value like 10 seconds, the connection “times out” and we disconnect.


This also gracefully handles the case of a second client trying to connect to a server that has already made a connection with another client. Since the server is already connected it ignores packets coming from any address other than the connected client, so the second client receives no packets in response to the packets it sends, so the second client times out and disconnects.


Conclusion


And that’s all it takes to set up a virtual connection: some way to establish connection, filtering for packets not involved in the connection, and timeouts to detect disconnection.


Our connection is as real as any TCP connection, and the steady stream of UDP packets it provides is a suitable starting point for a multiplayer action game.


Now that you have your virtual connection over UDP, you can easily setup a client/server relationship for a two player multiplayer game without TCP.

译文

译文出处

译者:张华栋(wcby) 审校:崔国军(飞扬971)


序言


大家好,我是Glenn Fiedler,欢迎阅读《针对游戏程序员的网络知识》系列教程的第三篇文章。

在之前的文章中,我向你展示了如何使用UDP协议来发送和接收数据包。

由于UDP协议是无连接的传输层协议,一个UDP套接字可以用来与任意数目的不同电脑进行数据包交换。但是在多人在线网络游戏中,我们通常只需要在一小部分互相连接的计算机之间交换数据包。


作为实现通用连接系统的第一步,我们将从最简单的可能情况开始:创建两台电脑之间构建于UDP协议之上的虚拟连接。

但是首先,我们将对互联网到底是如何工作的进行一点深度挖掘!




互联网不是一连串的管子


在2006年，参议员特德·史蒂文斯(Ted Stevens) 用他关于网络中立(net neutrality)法案的著名演讲创造了互联网的历史：

“互联网不是那种你随便丢点什么东西进去就能运行的东西。它不是一辆大卡车。它是一连串的管子。”

当我第一次开始使用互联网的时候,我也像Ted一样无知。那是1995年,我坐在悉尼大学的计算机实验室里,在用一种叫做Netscape的网络浏览器(最早最热门的网页浏览工具)“在网上冲浪(surfing the web)“,那个时候我对发生了什么根本一无所知。

你看那个时候,我觉得每次连到一个网站上就一定有某个真实存在的连接在帮我们传递信息,就像电话线一样。那时候我在想,当我每次连到一个新的网站上需要花费多少钱? 30美分吗?一美元吗? 会有大学里的某个人过来拍拍我的肩膀让我付长途通信的费用么?

当然,现在回头看那时候一切的想法都非常的愚蠢。

并没有在某个地方存在一个物理交换机，用物理电话线把你和你想通话的那台电脑直接连起来。更不用说像参议员史蒂文斯想让你相信的那样，存在一串气压输送管。



没有直接的连接


相反，你的数据是基于IP协议(Internet Protocol)，通过在电脑与电脑之间传递数据包来传输的。

一个数据包在到达它的目的地之前可能要经过好几台电脑。你没有办法提前知道数据包具体会经过哪些电脑，因为它会依据当前网络的情况动态地选择路由。甚至有可能给同一个地址发送A和B两个数据包，而这两个数据包采用完全不同的路由。这就是为什么UDP协议不能保证数据包的到达顺序。（其实这么说稍微容易引起误解，TCP协议是能保证数据包的到达顺序的，但它也是基于IP协议进行数据包的发送，并且往同一个地址发送的两个数据包也有可能采用完全不同的路由，这主要是因为TCP在自己这一层做了一些控制而UDP没有，所以TCP协议可以保证数据包的有序性，而UDP协议不能，当然这种保证需要付出性能方面的代价。）
在类unix的系统中可以通过调用“traceroute”函数并传递一个目的地主机名或IP地址来检查数据包的路由。

在Windows系统中,可以用“tracert”代替“traceroute”,其他不变,就能检查数据包的路由了。

像下面这样用一些网址来尝试下这种方法:

traceroute slashdot.org

traceroute amazon.com

traceroute google.com

traceroute bbc.co.uk

traceroute news.com.au

运行下看下输出结果,你应该很快就能说服你自己确实连接到了网站上,但是并没有一个直接的连接。



数据包是如何传递到目的地的?



在第一篇文章中,我对数据包传递到目的地这个事情做了一个简单的类比,把这个过程描述的有点像在一个拥挤的房间内一个人接着一个人的把便条传递下去。

虽然这个类比的基本思想还是表达出来了,但是它有点过于简单了。互联网并不是电脑组成的一个平面的网络,实际上它是网络的网络。当然,我们不只是要在一个小房间里面传递信件,我们要做的事能够把信息传递到全世界。


这就应该很清楚了,数据包传递到目的地的最好的类比是邮政服务!

当你想给某人写信的时候，你会把信件放进邮箱里，并且相信它会被正确地送达目的地。这封信具体是怎么到达那里的，其实和你关系不大，你在乎的只是它能送到。当然，总会有某个人在物理上帮你把信件送到目的地，那么这是怎么做到的呢？

首先,邮递员肯定不需要自己去把你的信件送到目的地!看起来邮政服务也不是一串管子。相反,邮递员是把你的信件带到当地的邮政部门进行处理。

如果这封信件是发送给本地的,那么邮政部门就会把这封信件发送回来,另外一个邮递员会直接投递这封信件。但是,如果这封信件不是发送给本地的,那么这个处理过程就有意思了!当地的邮政部门不能直接投递这封信件,所以这封信件会被向上传递到层次结构的上一层,这个上一层也许是地区级的邮政部门它会负责服务附近的几个城市,如果要投递的地址非常远的话,这个上一层也许是位于机场的一个邮件中心。理想情况下,信件的实际运输将通过一个大卡车来完成。

让我们通过一个例子来把上面说的过程具体的走一遍,假设有一封信件要从洛杉矶发送到澳大利亚的悉尼。当地的邮政部门收到信件以后考虑到这封信件是一封跨国投递的信件,所以会直接把它发送到位于洛杉矶机场的邮件中心。在那里,这封信件会再次根据它的地址进行处理,并被安排通过下一个到悉尼的航班投递到悉尼去。

当飞机降落到悉尼机场以后，一个完全不同的邮政系统会接管这封信件。现在整个过程开始逆向操作。这封信件会沿着层次结构向下传递，从宏观到具体。它会从悉尼机场的邮件中心被送往一个地区级的中心，地区级的中心再把它投递到当地的邮局，最终这封信会由一个操着有趣口音的邮递员亲手送到真正的目的地。哎呀！:)

就像邮局是通过信件的地址来决定这些信件是该如何投递的一样,网络也是根据这些数据包的IP地址来决定它们是该如何传递的。投递机制的底层细节以及数据包从网络到网络的实际路由其实都是相当复杂的,但是基本的想法都是一样的,就是每个路由器都只是另外一台计算机,它会携带一张路由表用来描述如果数据包的IP地址匹配了这张表上的某个地址集,那么这个数据包该如何传递,这张表还会记载着默认的网关地址,如果数据包的IP地址和这张路由表上的一个地址都匹配不上,那么这个数据包该传递到默认的网关地址那里。其实是路由表以及它们代表的物理连接定义了网络的网络,也就是互联网(互联网也被称为万维网)。

因特网于1969年诞生于美国，最初名为“阿帕网”(ARPAnet)，是一个军用研究系统，后来又成为连接大学及高等院校计算机的学术系统，如今已发展成为一个覆盖五大洲150多个国家的开放型全球计算机网络系统，拥有许多服务商。普通电脑用户只需要一台个人计算机，用电话线通过调制解调器和因特网服务商连接，便可进入因特网。但因特网并不是全球唯一的互联网络。例如在欧洲，跨国的互联网络就有“欧盟网”(Euronet)、“欧洲学术与研究网”(EARN)、“欧洲信息网”(EIN)，在美国还有“国际学术网”(BITNET)，世界范围的还有“飞多网”(全球性的BBS系统)等。不过其实你根本不需要了解这些网络的细节，感谢IP协议，只要知道它们是可以互联互通的就可以了。

这些路由表的配置工作是由网络管理员完成的,而不是由像我们这样的程序员来做。但是如果你想要了解这方面的更多内容, 那么来自ars technica的这篇文章将提供网络是如何在端与端之间互联来交换数据包以及传输关系方面一些非常有趣的见解。你还可以通过linux常见问题中路由表(routing tables)方面的文章以及维基百科上面的边界网关协议(border gateway protocol )的解释来获得更多的细节。边界网关协议是用来自动发现如何在网络之间路由数据包的协议,有了它才真正的让互联网成为一个分布式系统,能够在不稳定的连接里面进行动态的路由。

边界网关协议(BGP)是运行于 TCP 上的一种自治系统的路由协议。 BGP 是唯一一个用来处理像因特网大小的网络的协议,也是唯一能够妥善处理好不相关路由域间的多路连接的协议。 BGP 构建在 EGP 的经验之上。 BGP 系统的主要功能是和其他的 BGP 系统交换网络可达信息。网络可达信息包括列出的自治系统(AS)的信息。这些信息有效地构造了 AS 互联的拓朴图并由此清除了路由环路,同时在 AS 级别上可实施策略决策。



虚拟的连接


现在让我们回到连接本身。

如果你已经使用过TCP套接字,那么你会知道它们看起来真的像是一个连接,但是由于TCP协议是在IP协议之上实现的,而IP协议是通过在计算机之间进行跳转来传递数据包的,所以TCP的连接仍然是一个虚拟连接。

如果TCP协议可以基于IP协议建立虚拟连接，那么我们也可以基于UDP协议做到同样的事情。

让我们给虚拟连接下个定义:两个计算机之间以某个固定频率比如说每秒10个数据包来交换UDP的数据包。只要数据包仍然在传输,我们就认为这两台计算机之间存在一个虚拟连接。

我们的连接有两侧:

  • 一个计算机坐在那儿侦听是否有另一台计算机连接到它。我们称负责监听的这台计算机为服务器(server)。
  • 另一台计算机会通过一个指定的IP地址和端口连接到一个服务器。我们称主动连接的这台电脑为客户端(client)。

在我们的场景里，我们只允许一个客户端在任意时刻连接到服务器。我们将在后面的文章里拓展我们的连接系统以支持多个客户端的同时连接。此外，我们假定服务器处于一个固定的IP地址上，客户端可以直接连接上来。我们将在后面的文章里介绍匹配(matchmaking)和NAT打穿(NAT punch-through)。




协议ID


由于UDP协议是无连接的传输层协议,所以我们的UDP套接字可以接受来自任何电脑的数据包。

我们想要缩小接收数据包的范围,以便我们的服务器只接收那些从我们的客户端发送出来的数据包,并且我们的客户端只接收那些从我们的服务端发送出来的数据包。我们不能只通过地址来过滤我们的数据包,因为服务器没有办法提前知道客户端的地址。所以,我们会在每一个UDP数据包前面加上一个包含32位协议id的头,如下所示:

[uint protocol id]

(packet data…)

协议ID只是某个唯一的、代表我们游戏协议的数字。我们的UDP套接字收到的任何数据包，首先都要检查它的前四个字节。如果它们和我们的协议ID不匹配，这个数据包就会被忽略。如果协议ID匹配，我们会剔除数据包的前四个字节，并把剩下的部分作为有效载荷交给上层处理。

你只要选择一些非常独特的数字就可以了,这些数字可以是你的游戏名字和协议版本号的散列值。不过说真的,你可以使用任何东西。这种做法的重点是把我们的连接视为基于协议进行通信的连接,如果协议ID不同,那么这样的数据包将被丢弃掉。




检测连接


现在我们需要一个方法来检测连接。

当然我们可以实现一些复杂的握手协议，牵扯到多个UDP数据包来回传递。比如说客户端发送一个“请求连接(request connection)”的数据包给服务器，服务器收到后回应一个“连接接受(connection accepted)”的数据包给客户端；或者当服务器已经有一个已连接的客户端时，回复一个“我很忙(i'm busy)”的数据包给试图连接的客户端。

或者……我们可以直接设置我们的服务器，让它把收到的第一个携带正确协议ID的数据包视为连接已经建立。

客户端只是开始给服务器发送数据包,当服务器收到客户端发过来的第一个数据包的时候,它会记录下客户端的IP地址和端口号,然后开始给客户端回包。

客户端已经知道了服务器的地址和端口,因为这些信息是在连接的时候指定的。所以当客户端收到数据包的时候,它会过滤掉任何不是来自于服务器地址的数据包。同样的,一旦服务器收到客户端的第一个数据包,它就会从“recvfrom”函数里面得到客户端的地址和端口号,所以它也可以忽略任何不是发自客户端地址的数据包。

我们可以通过一个捷径来避开这个问题,因为我们的系统只有两台计算机会建立连接。在后面的文章里,我们将拓展我们的连接系统来支持超过两台计算机参与客户端/服务器或者端对端(peer-to-peer,p2p)网络模型,并且在那个时候我们会升级我们的连接协议方式来让它变得更加健壮。

但是现在,为什么我们要让事情变得超出需求的复杂度呢?(作者的意思是因为我们现在不需要解决这个问题,因为我们的场景是面对只有两台计算机的情况,所以我们可以先放过这个问题。)




检测断线的情况


我们该如何检测断线(disconnection)的情况?

那么,如果一个连接被定义为接收数据包,我们可以定义断线为收不到数据包。

为了检测什么时候开始我们收不到数据包,我们要记录上一次我们从连接的另外一侧收到数据包到现在过去了多少秒,我们在连接的两侧都做了这个事情。

每次我们从连接的另外一端收到数据包的时候,我们都会重置我们的计数器为0.0,每一次更新的时候我们都会把这次更新到上一次更新逝去的时间量加到计数器上。

如果计数器的值超过某一个值,比如说10秒,那么我们就认定这个连接“超时”了并且我们会断开连接。

这也可以很优雅的处理当服务器已经与一个客户端建立连接以后,有第二个客户端试图与服务器建立连接的情况。因为服务器已经建立了连接,它会忽略掉不是来自连接的客户端地址发出来的数据包,所以第二个客户端在发出了数据包以后得不到任何回应,这样它就会判断连接超时并断开连接。


总结


建立一个虚拟连接所需要的就是这些：用某种方法建立连接，过滤掉那些不属于这个连接的数据包，并且通过超时来检测断线。

我们的连接就跟任何TCP连接一样真实,并且UDP数据包构成的稳定数据流为多人在线动作网络游戏提供了一个很好的起点。

我们还获得了一些互联网是如何路由数据包的见解。举个例子来说,我们现在知道UDP数据包有时候会在到达的时候是乱序的原因是因为它们在IP层传输的时候采用不同的路由!看下互联网的地图,你会不会对你的数据包能够到达正确的目的点感到非常的神奇?如果你想对这个问题进行更加深入的了解,维基百科上的这篇文章(Internet backbone)是一个很好的起点。

现在,既然你已经有了一个基于UDP协议的虚拟连接,你可以轻松的在两个玩家的多人在线游戏里面设置一个客户端/服务器关系而不需要使用TCP协议。

你可以在这篇文章的示例源代码(examplesource code )找到一个具体实现。

这是一个简单的客户端/服务器程序,每秒交换30个数据包。你可以在任意你喜欢的机器上运行这个服务器,只要给它提供一个公共的IP地址就可以了,需要公共IP地址的原因是我们目前还不支持NAT打穿(NAT punch-through )。

NAT穿越(NATtraversal)涉及TCP/IP网络中的一个常见问题,即在处于使用了NAT设备的私有TCP/IP网络中的主机之间建立连接的问题。

像这样来运行客户端:

./Client 205.10.40.50

它会尝试连接到你在命令行输入的地址。如果你不输入地址的话,默认情况下它会连接到127.0.0.1。

当一个客户端已经与服务器建立连接的时候,你可以尝试用另外一个客户端来连接这个服务器,你会注意到这次连接的尝试失败了。这么设计是故意的。因为到目前为止,一次只允许一个客户端连接上服务器。

你也可以在客户端和服务器连接的状态下尝试停止客户端或者服务器,你会注意到10秒以后连接的另外一侧会判断连接超时并断开连接。当客户端超时的时候它会退到shell窗口,但是服务器会退到监听状态为下一次的连接做好准备。

预告下接下来的一篇文章的题目:《基于UDP的可靠、有序和拥塞避免的传输》,欢迎继续阅读。

如果你喜欢这篇文章的话,请考虑对我做一个小小的捐赠。捐款会鼓励我写更多的文章!(原文作者在原文的地址上提供了一个捐赠网址,有兴趣的读者可以在文章开始的地方找到原文地址进行捐赠)



【版权声明】

原文作者未做权利声明,视为共享知识产权进入公共领域,自动获得授权。


源码下载

因 Gaffer On Games 的源码原下载地址失效, 所以特地补上.

请点击

游戏网络开发二之数据的发送与接收

Posted on 11-15-2016 | In GS

原文

原文出处

Introduction

Hi, I’m Glenn Fiedler and welcome to Networking for Game Programmers.


In the previous article we discussed options for sending data between computers and decided to use UDP instead of TCP for time critical data.


In this article I am going to show you how to send and receive UDP packets.


BSD sockets


For most modern platforms you have some sort of basic socket layer available based on BSD sockets.


BSD sockets are manipulated using simple functions like “socket”, “bind”, “sendto” and “recvfrom”. You can of course work directly with these functions if you wish, but it becomes difficult to keep your code platform independent because each platform is slightly different.


So although I will first show you BSD socket example code to demonstrate basic socket usage, we won’t be using BSD sockets directly for long. Once we’ve covered all basic socket functionality we’ll abstract everything away into a set of classes, making it easy for you to write platform independent socket code.


Platform specifics


First let’s setup a define so we can detect what our current platform is and handle the slight differences in sockets from one platform to another:


    // platform detection

#define PLATFORM_WINDOWS 1
#define PLATFORM_MAC 2
#define PLATFORM_UNIX 3

#if defined(_WIN32)
#define PLATFORM PLATFORM_WINDOWS
#elif defined(__APPLE__)
#define PLATFORM PLATFORM_MAC
#else
#define PLATFORM PLATFORM_UNIX
#endif

Now let’s include the appropriate headers for sockets. Since the header files are platform specific, we’ll use the platform #define to include different sets of files depending on the platform:


    #if PLATFORM == PLATFORM_WINDOWS

#include <winsock2.h>

#elif PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX

#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>

#endif

Sockets are built in to the standard system libraries on unix-based platforms so we don’t have to link to any additional libraries. However, on Windows we need to link to the winsock library to get socket functionality.


Here is a simple trick to do this without having to change your project or makefile:


    #if PLATFORM == PLATFORM_WINDOWS
#pragma comment( lib, "wsock32.lib" )
#endif

I like this trick because I’m super lazy. You can always link from your project or makefile if you wish.


Initializing the socket layer


Most unix-like platforms (including macosx) don’t require any specific steps to initialize the sockets layer, however Windows requires that you jump through some hoops to get your socket code working.


You must call “WSAStartup” to initialize the sockets layer before you call any socket functions, and “WSACleanup” to shutdown when you are done.


Let’s add two new functions:


    bool InitializeSockets()
{
#if PLATFORM == PLATFORM_WINDOWS
WSADATA WsaData;
return WSAStartup( MAKEWORD(2,2),
&WsaData )
== NO_ERROR;
#else
return true;
#endif
}

void ShutdownSockets()
{
#if PLATFORM == PLATFORM_WINDOWS
WSACleanup();
#endif
}

Now we have a platform independent way to initialize the socket layer.


Creating a socket


It’s time to create a UDP socket, here’s how to do it:


    int handle = socket( AF_INET,
SOCK_DGRAM,
IPPROTO_UDP );

if ( handle <= 0 )
{
printf( "failed to create socket\n" );
return false;
}

Next we bind the UDP socket to a port number (eg. 30000). Each socket must be bound to a unique port, because when a packet arrives the port number determines which socket to deliver to. Don’t use ports lower than 1024 because they are reserved for the system. Also try to avoid using ports above 50000 because they are used when dynamically assigning ports.


Special case: if you don’t care what port your socket gets bound to just pass in “0” as your port, and the system will select a free port for you.


    sockaddr_in address;
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port =
htons( (unsigned short) port );

if ( bind( handle,
(const sockaddr*) &address,
sizeof(sockaddr_in) ) < 0 )
{
printf( "failed to bind socket\n" );
return false;
}

Now the socket is ready to send and receive packets.


But what is this mysterious call to “htons” in the code above? This is just a helper function that converts a 16 bit integer value from host byte order (little or big-endian) to network byte order (big-endian). This is required whenever you directly set integer members in socket structures.


You’ll see “htons” (host to network short) and its 32 bit integer sized cousin “htonl” (host to network long) used several times throughout this article, so keep an eye out, and you’ll know what is going on.


Setting the socket as non-blocking


By default sockets are set in what is called “blocking mode”.


This means that if you try to read a packet using “recvfrom”, the function will not return until a packet is available to read. This is not at all suitable for our purposes. Video games are realtime programs that simulate at 30 or 60 frames per second, they can’t just sit there waiting for a packet to arrive!


The solution is to flip your sockets into “non-blocking mode” after you create them. Once this is done, the “recvfrom” function returns immediately when no packets are available to read, with a return value indicating that you should try to read packets again later.


Here’s how to put a socket in non-blocking mode:


    #if PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX

int nonBlocking = 1;
if ( fcntl( handle,
F_SETFL,
O_NONBLOCK,
nonBlocking ) == -1 )
{
printf( "failed to set non-blocking\n" );
return false;
}

#elif PLATFORM == PLATFORM_WINDOWS

DWORD nonBlocking = 1;
if ( ioctlsocket( handle,
FIONBIO,
&nonBlocking ) != 0 )
{
printf( "failed to set non-blocking\n" );
return false;
}

#endif

Windows does not provide the “fcntl” function, so we use the “ioctlsocket” function instead.


Sending packets


UDP is a connectionless protocol, so each time you send a packet you must specify the destination address. This means you can use one UDP socket to send packets to any number of different IP addresses, there’s no single computer at the other end of your UDP socket that you are connected to.


Here’s how to send a packet to a specific address:


    int sent_bytes =
sendto( handle,
(const char*)packet_data,
packet_size,
0,
(sockaddr*)&address,
sizeof(sockaddr_in) );

if ( sent_bytes != packet_size )
{
printf( "failed to send packet\n" );
return false;
}

Important! The return value from “sendto” only indicates if the packet was successfully sent from the local computer. It does not tell you whether or not the packet was received by the destination computer. UDP has no way of knowing whether or not the packet arrived at its destination!


In the code above we pass a “sockaddr_in” structure as the destination address. How do we setup one of these structures?


Let’s say we want to send to the address 207.45.186.98:30000


Starting with our address in this form:


    unsigned int a = 207;
unsigned int b = 45;
unsigned int c = 186;
unsigned int d = 98;
unsigned short port = 30000;

We have a bit of work to do to get it in the form required by “sendto”:


    unsigned int address = ( a << 24 ) |
( b << 16 ) |
( c << 8 ) |
d;

sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl( address );
addr.sin_port = htons( port );

As you can see, we first combine the a,b,c,d values in range [0,255] into a single unsigned integer, with each byte of the integer now corresponding to the input values. We then initialize a “sockaddr_in” structure with the integer address and port, making sure to convert our integer address and port values from host byte order to network byte order using “htonl” and “htons”.


Special case: if you want to send a packet to yourself, there’s no need to query the IP address of your own machine, just pass in the loopback address 127.0.0.1 and the packet will be sent to your local machine.


Receiving packets


Once you have a UDP socket bound to a port, any UDP packets sent to your sockets IP address and port are placed in a queue. To receive packets just loop and call “recvfrom” until it fails with EWOULDBLOCK indicating there are no more packets to receive.


Since UDP is connectionless, packets may arrive from any number of different computers. Each time you receive a packet “recvfrom” gives you the IP address and port of the sender, so you know where the packet came from.


Here’s how to loop and receive all incoming packets:


    while ( true )
{
unsigned char packet_data[256];

unsigned int max_packet_size =
sizeof( packet_data );

#if PLATFORM == PLATFORM_WINDOWS
typedef int socklen_t;
#endif

sockaddr_in from;
socklen_t fromLength = sizeof( from );

int bytes = recvfrom( socket,
(char*)packet_data,
max_packet_size,
0,
(sockaddr*)&from,
&fromLength );

if ( bytes <= 0 )
break;

unsigned int from_address =
ntohl( from.sin_addr.s_addr );

unsigned int from_port =
ntohs( from.sin_port );

// process received packet
}

Any packets in the queue larger than your receive buffer will be silently discarded. So if you have a 256 byte buffer to receive packets like the code above, and somebody sends you a 300 byte packet, the 300 byte packet will be dropped. You will not receive just the first 256 bytes of the 300 byte packet.


Since you are writing your own game network protocol, this is no problem at all in practice, just make sure your receive buffer is big enough to receive the largest packet your code could possibly send.


Destroying a socket


On most unix-like platforms, sockets are file handles so you use the standard file “close” function to clean up sockets once you are finished with them. However, Windows likes to be a little bit different, so we have to use “closesocket” instead:


#if PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX
close( socket );
#elif PLATFORM == PLATFORM_WINDOWS
closesocket( socket );
#endif

Hooray windows.


Socket class


So we’ve covered all the basic operations: creating a socket, binding it to a port, setting it to non-blocking, sending and receiving packets, and destroying the socket.


But you’ll notice most of these operations are slightly platform dependent, and it’s pretty annoying to have to remember to #ifdef and do platform specifics each time you want to perform socket operations.


We’re going to solve this by wrapping all our socket functionality up into a “Socket” class. While we’re at it, we’ll add an “Address” class to make it easier to specify internet addresses. This avoids having to manually encode or decode a “sockaddr_in” structure each time we send or receive packets.


So let’s add a socket class:


    class Socket
{
public:

Socket();

~Socket();

bool Open( unsigned short port );

void Close();

bool IsOpen() const;

bool Send( const Address & destination,
const void * data,
int size );

int Receive( Address & sender,
void * data,
int size );

private:

int handle;
};

and an address class:


    class Address
{
public:

Address();

Address( unsigned char a,
unsigned char b,
unsigned char c,
unsigned char d,
unsigned short port );

Address( unsigned int address,
unsigned short port );

unsigned int GetAddress() const;

unsigned char GetA() const;
unsigned char GetB() const;
unsigned char GetC() const;
unsigned char GetD() const;

unsigned short GetPort() const;

private:

unsigned int address;
unsigned short port;
};

Here’s how to send and receive packets with these classes:


    // create socket

const int port = 30000;

Socket socket;

if ( !socket.Open( port ) )
{
printf( "failed to create socket!\n" );
return false;
}

// send a packet

const char data[] = "hello world!";

socket.Send( Address(127,0,0,1,port), data, sizeof( data ) );

// receive packets

while ( true )
{
Address sender;
unsigned char buffer[256];
int bytes_read =
socket.Receive( sender,
buffer,
sizeof( buffer ) );
if ( !bytes_read )
break;

// process packet
}

As you can see it’s much simpler than using BSD sockets directly.


As an added bonus the code is the same on all platforms because everything platform specific is handled inside the socket and address classes.


Conclusion


You now have a platform independent way to send and receive packets. Enjoy :)

译文

译文出处





因译文很多地方均有疏漏, 本文已经对部分疏漏做了修正.

翻译:杨嘉鑫(矫情到死的仓鼠君,)审校:赵菁菁(轩语轩缘)


序言

大家好,我是Glenn Fiedler,欢迎阅读《针对游戏程序员的网络知识》系列教程的第二篇文章。

在前面的文章中我们讨论了在不同计算机之间发送数据的方法，并决定对时间敏感的数据使用用户数据报协议(UDP)而非传输控制协议(TCP)。我们之所以使用UDP，是因为它能让数据按时送达，而不会因为等待重发的数据包而产生堆积。

现在我将要告诉各位如何使用用户数据报协议(UDP)发送和接收数据包。


伯克利套接字 (BSD socket)

对于大多数现代平台来说，你都可以使用某种基于伯克利套接字(BSD sockets)的基础socket层。伯克利套接字主要通过“socket”、“bind”、“sendto”和“recvfrom”这几个简单函数进行操作。如果你愿意的话，当然可以直接调用这几个函数，但是由于每个平台之间有细微差别，想保持代码的平台独立性会变得有些困难。因此，尽管我会先给各位展示伯克利套接字的示例代码来说明socket的基本用法，我们并不会长期直接使用伯克利套接字。当我们掌握了所有基础socket功能之后，我们会把这一切抽象封装成一组类，以便你可以轻松地编写平台无关的socket代码。


平台的特殊性

首先，让我们定义一个宏，用来检测当前的平台，这样我们就可以处理不同平台之间socket的细微差别：

// platform detection
#define PLATFORM_WINDOWS 1
#define PLATFORM_MAC 2
#define PLATFORM_UNIX 3
#if defined(_WIN32)
#define PLATFORM PLATFORM_WINDOWS
#elif defined(__APPLE__)
#define PLATFORM PLATFORM_MAC
#else
#define PLATFORM PLATFORM_UNIX
#endif

接下来我们引入socket所需的头文件。由于头文件是平台相关的，我们将使用上面定义的平台宏，根据不同的平台包含不同的文件：

#if PLATFORM == PLATFORM_WINDOWS
#include <winsock2.h>
#elif PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#endif


在类unix平台上，socket内建于标准系统库中，所以我们不需要链接任何额外的库。但是在Windows系统里，为了获得socket功能，我们需要链接winsock库。

以下是一个简单的技巧,它可以在不改变已有项目或生成文件的前提下完成上述工作。

#if PLATFORM == PLATFORM_WINDOWS
#pragma comment( lib, "wsock32.lib" )
#endif


我之所以喜欢这个小技巧是因为我太懒了~当然，如果你愿意，也可以在你的项目或makefile里进行链接。


socket层的初始化

大多数“unix-like”的平台 (包括macosx) 是不需要任何特殊的步骤去初始化socket层的。但是Windows需要进行一些特殊设置来确保你的sockets代码正常工作。在你使用其他任何sockets功能前你必须先调用 “WSAStartup” 来初始化它们,在你的程序段结束时你也必须使用 “WSACleanup”来结束。

下面让我们来添加以上两个新功能:

bool InitializeSockets()
{
#if PLATFORM == PLATFORM_WINDOWS
WSADATA WsaData;
return WSAStartup( MAKEWORD(2,2), &WsaData ) == NO_ERROR;
#else
return true;
#endif
}
void ShutdownSockets()
{
#if PLATFORM == PLATFORM_WINDOWS
WSACleanup();
#endif
}


这样我们就得到了一个初始化socket层的方法。对于那些不需要socket初始化的平台来说这些功能可以忽略不计。


建立一个socket

现在是时候来建立一个基于用户数据报协议(UDP)的socket了,下面是实施的方法:

int handle = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );
if ( handle <= 0 )
{
printf( "failed to create socket\n" );
return false;
}

接下来我们把UDP的socket绑定到一个端口号上(比如30000这个端口)。每一个socket都必须绑定到一个唯一的端口，因为当数据包到达时，端口号决定了它该交给哪个socket。不要使用1024以下的端口，因为它们是为系统保留的。另外也尽量避免使用50000以上的端口，因为它们会在动态分配端口的时候被用到。

有一种特殊情况,如果你不在乎socket指定到哪个端口上,你就可以输入“0”,这样系统将会自动为你选择一个闲置的端口。

sockaddr_in address;
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons( (unsigned short) port );
if ( bind( handle, (const sockaddr*) &address, sizeof(sockaddr_in) ) < 0 )
{
printf( "failed to bind socket\n" );
return false;
}

这样我们的socket已经准备就绪并可以发送和接收包了。

那么上面提到的“htons”起什么作用呢？这是一个辅助函数，它将一个16位整数的值从主机字节序(小端或大端)转换成网络字节序(大端)。每当你直接设置socket结构体中的整数成员时，都需要做这种转换。

在这篇文章中你会多次看到“htons”(host to network short，主机到网络短整型)及其32位版本“htonl”(host to network long，主机到网络长整型)，留意它们，你就会明白代码里发生了什么。


将socket设置为非阻塞形式

默认情况下，socket处于所谓的“阻塞模式”。这意味着，如果你尝试使用“recvfrom”读取一个数据包，在有数据包可读之前该函数不会返回。这与我们的目标完全不符。视频游戏是以每秒30或60帧进行模拟的实时程序，它们不能只是坐在那里等待数据包的到达！

解决方案是你将socket转换成以“非阻塞模式”后再创建他们。一旦做到这一点,当没有包可供阅读时,“recvfrom”函数就可以立即返回,返回值显示你应该稍后再尝试读取包。

下面是如何将socket设置为非阻塞模式的方法:

#if PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX
int nonBlocking = 1;
if ( fcntl( handle, F_SETFL, O_NONBLOCK, nonBlocking ) == -1 )
{
printf( "failed to set non-blocking\n" );
return false;
}
#elif PLATFORM == PLATFORM_WINDOWS
DWORD nonBlocking = 1;
if ( ioctlsocket( handle, FIONBIO, &nonBlocking ) != 0 )
{
printf( "failed to set non-blocking\n" );
return false;
}
#endif

从上面的程序我们可以发现，Windows并不提供“fcntl”函数，所以我们使用“ioctlsocket”函数来代替。


发送数据包

用户数据报协议(UDP)是一种无连接协议,所以每次你发送一个数据包前都要指定一个目的地址。你可以使用一个用户数据报协议(UDP)发送数据包到任意数量的不同的IP地址,而在你用户数据报协议(UDP) socket的另一端并没有连接某一台计算机。

下面是如何发送一个数据包到一个特定的地址方法:

int sent_bytes = sendto( handle,
(const char*)packet_data,
packet_size,
0,
(sockaddr*)&address,
sizeof(sockaddr_in) );
if ( sent_bytes != packet_size )
{
printf( "failed to send packet\n" );
return false;
}

很重要的一点!“sendto”的返回值只是表明数据包是否被成功地从本地计算机发送,它并不能表明目标计算机是否成功接收到你的数据包!用户数据报协议(UDP)没有办法知道数据包是否能到达目的地。

上面的代码中，我们传入了一个“sockaddr_in”结构作为目的地址。

那么我们如何设置这些结构呢?

现在让我们以发送到207.45.186.98:30000 这个地址为例

我们从以下这个程序开始:

unsigned int a = 207;
unsigned int b = 45;
unsigned int c = 186;
unsigned int d = 98;
unsigned short port = 30000;
我们还需要做一点工作，把它转换成“sendto”所要求的形式：
unsigned int address = ( a << 24 ) |
( b << 16 ) |
( c << 8 ) | d;
sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl( address );
addr.sin_port = htons( port );

正如您所看到的，我们首先将范围在[0,255]内的a、b、c、d四个值合并成一个单一的无符号整数，使这个整数的每个字节对应一个输入值。然后用这个整数地址和端口来初始化一个“sockaddr_in”结构，并确保使用“htonl”和“htons”将整型的地址和端口值从主机字节序转换为网络字节序。

一种特殊情况：如果你想给自己发送一个数据包，不需要查询自己机器的IP地址，只需传入回环地址127.0.0.1，数据包就会被发送到你的本地机器。


接收数据包

一旦你将一个UDP套接字绑定到一个端口，任何发送到你的socket的IP地址和端口的UDP数据包都会被放进一个队列。要接收数据包，只需要循环调用“recvfrom”函数，直到它失败并返回“EWOULDBLOCK”，这意味着队列里没有剩下的数据包了。由于UDP是无连接的，数据包可能来自任意数量的不同计算机。每当你收到一个数据包，“recvfrom”都会给你发送者的IP地址和端口，以便你知道这个数据包来自哪里。

下面是如何进行循环接收传入的数据包的方法:

while ( true )
{
unsigned char packet_data[256];
unsigned int max_packet_size = sizeof( packet_data );
#if PLATFORM == PLATFORM_WINDOWS
typedef int socklen_t;
#endif
sockaddr_in from;
socklen_t fromLength = sizeof( from );
int bytes = recvfrom( socket,
(char*)packet_data,
max_packet_size,
0,
(sockaddr*)&from,
&fromLength );
if ( bytes <= 0 )
break;
unsigned int from_address = ntohl( from.sin_addr.s_addr );
unsigned int from_port = ntohs( from.sin_port );
// process received packet
}

队列中任何大于你接收缓冲区的数据包都会被系统悄悄丢弃。因此，如果你像上面的代码那样用一个256字节的缓冲区来接收数据包，而有人给你发送了一个300字节的数据包，这个数据包将被整个丢弃。你不会只收到这个300字节数据包的前256个字节。

因为您正在编写自己的游戏网络协议，这在实践中完全不是问题，只要确保您的接收缓冲区足够大，能够容纳您的代码可能发送的最大数据包即可。


关闭一个socket

在大多数Unix平台,一旦你完成了自己所需的程序后,在socket文件中只要使用标准的文件“close”函数来清理即可。然而,在Windows系统中以上情形会有点不同,我们要用“closesocket”函数来操作:

#if PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX
close( socket );
#elif PLATFORM == PLATFORM_WINDOWS
closesocket( socket );
#endif

Socket class

现在,我们已经完成了所有的基本操作:创建一个socket,将他绑定到一个端口并设置为非阻塞,发送和接收数据包,清除socket。

但是你会发现以上这些操作中多多少少都是依赖于平台的,在每一次你想执行socket操作时,你不得不记住“# ifdef”指令和针对不同平台的各种细节,这些繁琐的操作是很令人抓狂的。

为了解决这个问题,我们可以将所有的socket功能封装成一个“socket class‘’。当我们在使用它的时候,我们将添加一个“Address class‘’,这样使它更容易指定互联网地址。这避免了我们每次发送或接收数据包时进行手动编码或解码“sockaddr_in”结构。

下面是“socket class‘’的程序:

class Socket
{
public:
Socket();
~Socket();
bool Open( unsigned short port );
void Close();
bool IsOpen() const;
bool Send( const Address & destination,
const void * data,
int size );
int Receive( Address & sender,
void * data,
int size );
private:
int handle;
};


下面是“address class”的程序:

class Address
{
public:
Address();
Address( unsigned char a,
unsigned char b,
unsigned char c,
unsigned char d,
unsigned short port );
Address( unsigned int address,
unsigned short port );
unsigned int GetAddress() const;
unsigned char GetA() const;
unsigned char GetB() const;
unsigned char GetC() const;
unsigned char GetD() const;
unsigned short GetPort() const;
private:
unsigned int address;
unsigned short port;
};

下面是这些class如何接收和发送数据包的程序:

// create socket
const int port = 30000;
Socket socket;
if ( !socket.Open( port ) )
{
printf( "failed to create socket!\n" );
return false;
}
// send a packet
const char data[] = “hello world!”;
socket.Send( Address(127,0,0,1,port), data, sizeof( data ) );
// receive packets
while ( true )
{
Address sender;
unsigned char buffer[256];
int bytes_read =
socket.Receive( sender,
buffer,
sizeof( buffer ) );
if ( !bytes_read )
break;
// process packet
}


Conclusion

We now have a platform-independent way to send and receive packets over UDP.

UDP is connectionless, so I wrote a simple example program that reads IP addresses from a text file and sends a packet to each of those addresses once per second. Whenever the program receives a packet, it reports which machine the packet came from and how large it was.

You can easily set this up so that you have a number of nodes on your local machine sending packets to each other, by starting separate instances on different ports:

> Node 30000

> Node 30001

> Node 30002

etc…

Each node then attempts to send packets to every other node, working like a little peer-to-peer setup.

I developed this program on MacOSX, but you should be able to compile it easily on any Unix system or on Windows. If you have any compatibility patches for other machines, you are very welcome to contact me.


[Copyright Notice]

The original author made no rights declaration, so the work is regarded as shared intellectual property in the public domain, with authorization granted automatically.


Source code download

Since the original download link for the Gaffer On Games source code is no longer available, it is mirrored here.

Please click

Game Networking Part 1: TCP vs. UDP

Posted on 11-14-2016 | In GS

原文

原文出处

Introduction

Hi, I’m Glenn Fiedler and welcome to Networking for Game Programmers.


In this article we start with the most basic aspect of network programming: sending and receiving data over the network. This is perhaps the simplest and most basic part of what network programmers do, but still it is quite intricate and non-obvious as to what the best course of action is.


You have most likely heard of sockets, and are probably aware that there are two main types: TCP and UDP. When writing a network game, we first need to choose what type of socket to use. Do we use TCP sockets, UDP sockets or a mixture of both? Take care because if you get this wrong it will have terrible effects on your multiplayer game!


The choice you make depends entirely on what sort of game you want to network. So from this point on and for the rest of this article series, I assume you want to network an action game. You know, games like Halo, Battlefield 1942, Quake, Unreal, CounterStrike and Team Fortress.


In light of the fact that we want to network an action game, we’ll take a very close look at the properties of each protocol, and dig a bit into how the internet actually works. Once we have all this information, the correct choice is clear.


TCP/IP


TCP stands for “transmission control protocol”. IP stands for “internet protocol”. Together they form the backbone for almost everything you do online, from web browsing to IRC to email, it’s all built on top of TCP/IP.


If you have ever used a TCP socket, then you know it’s a reliable connection based protocol. This means you create a connection between two machines, then you exchange data much like you’re writing to a file on one side, and reading from a file on the other.


TCP connections are reliable and ordered. All data you send is guaranteed to arrive at the other side and in the order you wrote it. It’s also a stream protocol, so TCP automatically splits your data into packets and sends them over the network for you.


IP


The simplicity of TCP is in stark contrast to what actually goes on underneath TCP at the IP or “internet protocol” level.


Here there is no concept of connection, packets are simply passed from one computer to the next. You can visualize this process being somewhat like a hand-written note passed from one person to the next across a crowded room, eventually, reaching the person it’s addressed to, but only after passing through many hands.


There is also no guarantee that this note will actually reach the person it is intended for. The sender just passes the note along and hopes for the best, never knowing whether or not the note was received, unless the other person decides to write back!


Of course IP is in reality a little more complicated than this, since no one computer knows the exact sequence of computers to pass the packet along to so that it reaches its destination quickly. Sometimes IP passes along multiple copies of the same packet and these packets make their way to the destination via different paths, causing packets to arrive out of order and in duplicate.


This is because the internet is designed to be self-organizing and self-repairing, able to route around connectivity problems rather than relying on direct connections between computers. It’s actually quite cool if you think about what’s really going on at the low level. You can read all about this in the classic book TCP/IP Illustrated.


UDP


Instead of treating communications between computers like writing to files, what if we want to send and receive packets directly?


We can do this using UDP.


UDP stands for “user datagram protocol” and it’s another protocol built on top of IP, but unlike TCP, instead of adding lots of features and complexity, UDP is a very thin layer over IP.


With UDP we can send a packet to a destination IP address (eg. 112.140.20.10) and port (say 52423), and it gets passed from computer to computer until it arrives at the destination or is lost along the way.


On the receiver side, we just sit there listening on a specific port (eg. 52423) and when a packet arrives from any computer (remember there are no connections!), we get notified of the address and port of the computer that sent the packet, the size of the packet, and can read the packet data.


Like IP, UDP is an unreliable protocol. In practice however, most packets that are sent will get through, but you’ll usually have around 1-5% packet loss, and occasionally you’ll get periods where no packets get through at all (remember there are lots of computers between you and your destination where things can go wrong…)


There is also no guarantee of ordering of packets with UDP. You could send 5 packets in order 1,2,3,4,5 and they could arrive completely out of order like 3,1,2,5,4. In practice, packets tend to arrive in order most of the time, but you cannot rely on this!


UDP also provides a 16 bit checksum, which in theory is meant to protect you from receiving invalid or truncated data, but you can’t even trust this, since 16 bits is just not enough protection when you are sending UDP packets rapidly over a long period of time. Statistically, you can’t even rely on this checksum and must add your own.
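One way to add your own checksum, sketched here in Python with helper names of my own invention, is to prepend a CRC32 of the payload to each packet and verify it on receipt:

```python
import struct
import zlib

def add_checksum(payload: bytes) -> bytes:
    # prepend a CRC32 of the payload (4 bytes, little-endian)
    return struct.pack("<I", zlib.crc32(payload)) + payload

def verify_checksum(packet: bytes):
    # return the payload if the checksum matches, else None
    (crc,) = struct.unpack("<I", packet[:4])
    payload = packet[4:]
    return payload if crc == zlib.crc32(payload) else None

packet = add_checksum(b"player position update")
ok = verify_checksum(packet)
bad = verify_checksum(packet[:-1] + b"\x00")  # corrupt the last byte
```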


So in short, when you use UDP you’re pretty much on your own!


TCP vs. UDP


We have a decision to make here, do we use TCP sockets or UDP sockets?


Let's look at the properties of each:


TCP:



  • Connection based

  • Guaranteed reliable and ordered

  • Automatically breaks up your data into packets for you

  • Makes sure it doesn’t send data too fast for the internet connection to handle (flow control)

  • Easy to use, you just read and write data like it's a file


UDP:



  • No concept of connection, you have to code this yourself

  • No guarantee of reliability or ordering of packets, they may arrive out of order, be duplicated, or not arrive at all!

  • You have to manually break your data up into packets and send them

  • You have to make sure you don’t send data too fast for your internet connection to handle

  • If a packet is lost, you need to devise some way to detect this, and resend that data if necessary

  • You can’t even rely on the UDP checksum so you must add your own


The decision seems pretty clear then, TCP does everything we want and its super easy to use, while UDP is a huge pain in the ass and we have to code everything ourselves from scratch.


So obviously we just use TCP right?


Wrong!


Using TCP is the worst possible mistake you can make when developing a multiplayer game! To understand why, you need to see what TCP is actually doing above IP to make everything look so simple.


How TCP really works


TCP and UDP are both built on top of IP, but they are radically different. UDP behaves very much like the IP protocol underneath it, while TCP abstracts everything so it looks like you are reading and writing to a file, hiding all complexities of packets and unreliability from you.


So how does it do this?


Firstly, TCP is a stream protocol, so you just write bytes to a stream, and TCP makes sure that they get across to the other side. Since IP is built on packets, and TCP is built on top of IP, TCP must therefore break your stream of data up into packets. So, some internal TCP code queues up the data you send, then when enough data is pending the queue, it sends a packet to the other machine.


This can be a problem for multiplayer games if you are sending very small packets. What can happen here is that TCP may decide it’s not going to send data until you have buffered up enough data to make a reasonably sized packet to send over the network.


This is a problem because you want your client player input to get to the server as quickly as possible, if it is delayed or “clumped up” like TCP can do with small packets, the client’s user experience of the multiplayer game will be very poor. Game network updates will arrive late and infrequently, instead of on-time and frequently like we want.


TCP has an option to fix this behavior called TCP_NODELAY. This option instructs TCP not to wait around until enough data is queued up, but to flush any data you write to it immediately. This is referred to as disabling Nagle’s algorithm.
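In the BSD sockets API this is a single setsockopt call. A minimal Python sketch (the same IPPROTO_TCP/TCP_NODELAY constants exist in the C API):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# disable Nagle's algorithm so small writes are sent immediately
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```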


Unfortunately, even if you set this option TCP still has serious problems for multiplayer games and it all stems from how TCP handles lost and out of order packets to present you with the “illusion” of a reliable, ordered stream of data.


How TCP implements reliability


Fundamentally TCP breaks down a stream of data into packets, sends these packets over unreliable IP, then takes the packets received on the other side and reconstructs the stream.


But what happens when a packet is lost?


What happens when packets arrive out of order or are duplicated?


Without going too much into the details of how TCP works because its super-complicated (please refer to TCP/IP Illustrated) in essence TCP sends out a packet, waits a while until it detects that packet was lost because it didn’t receive an ack (or acknowledgement), then resends the lost packet to the other machine. Duplicate packets are discarded on the receiver side, and out of order packets are resequenced so everything is reliable and in order.


The problem is that if we were to send our time critical game data over TCP, whenever a packet is dropped it has to stop and wait for that data to be resent. Yes, even if more recent data arrives, that new data gets put in a queue, and you cannot access it until that lost packet has been retransmitted. How long does it take to resend the packet?


Well, it’s going to take at least round trip latency for TCP to work out that data needs to be resent, but commonly it takes 2*RTT, and another one way trip from the sender to the receiver for the resent packet to get there. So if you have a 125ms ping, you’ll be waiting roughly 1/5th of a second for the packet data to be resent at best, and in worst case conditions you could be waiting up to half a second or more (consider what happens if the attempt to resend the packet fails to get through?). What happens if TCP decides the packet loss indicates network congestion and it backs off? Yes it actually does this. Fun times!
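As a rough sanity check of those figures (my own arithmetic, assuming loss is detected after one to two RTTs plus a one-way trip for the resent packet):

```python
rtt = 0.125  # a 125ms ping

# best case: loss detected after about one RTT, then one more
# one-way trip (rtt / 2) for the resent packet to arrive
best = rtt + rtt / 2          # ~0.19s, roughly 1/5th of a second

# common case: detection takes about 2*RTT
common = 2 * rtt + rtt / 2    # ~0.31s, and worse if the resend is lost too
```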


Never use TCP for time critical data


The problem with using TCP for realtime games like FPS is that unlike web browsers, or email or most other applications, these multiplayer games have a real time requirement on packet delivery.


What this means is that for many parts of a game, for example player input and character positions, it really doesn’t matter what happened a second ago, the game only cares about the most recent data.


TCP was simply not designed with this in mind.


Consider a very simple example of a multiplayer game, some sort of action game like a shooter. You want to network this in a very simple way. Every frame you send the input from the client to the server (eg. keypresses, mouse input, controller input), and each frame the server processes the input from each player, updates the simulation, then sends the current position of game objects back to the client for rendering.


So in our simple multiplayer game, whenever a packet is lost, everything has to stop and wait for that packet to be resent. On the client game objects stop receiving updates so they appear to be standing still, and on the server input stops getting through from the client, so the players cannot move or shoot. When the resent packet finally arrives, you receive this stale, out of date information that you don't even care about! Plus, there are packets backed up in queue waiting for the resend which arrive at the same time, so you have to process all of these packets in one frame. Everything is clumped up!


Unfortunately, there is nothing you can do to fix this behavior, it’s just the fundamental nature of TCP. This is just what it takes to make the unreliable, packet-based internet look like a reliable-ordered stream.


Thing is we don’t want a reliable ordered stream.


We want our data to get as quickly as possible from client to server without having to wait for lost data to be resent.


This is why you should never use TCP when networking time-critical data!


Wait? Why can’t I use both UDP and TCP?


For realtime game data like player input and state, only the most recent data is relevant, but for other types of data, say perhaps a sequence of commands sent from one machine to another, reliability and ordering can be very important.


The temptation then is to use UDP for player input and state, and TCP for the reliable ordered data. If you’re sharp you’ve probably even worked out that you may have multiple “streams” of reliable ordered commands, maybe one about level loading, and another about AI. Perhaps you think to yourself, “Well, I’d really not want AI commands to stall out if a packet is lost containing a level loading command - they are completely unrelated!”. You are right, so you may be tempted to create one TCP socket for each stream of commands.


On the surface, this seems like a great idea. The problem is that since TCP and UDP are both built on top of IP, the underlying packets sent by each protocol will affect each other. Exactly how they affect each other is quite complicated and relates to how TCP performs reliability and flow control, but fundamentally you should remember that TCP tends to induce packet loss in UDP packets. For more information, read this paper on the subject.


Also, it's pretty complicated to mix UDP and TCP. If you mix UDP and TCP you lose a certain amount of control. Maybe you can implement reliability in a more efficient way than TCP does, better suited to your needs? Even if you need reliable-ordered data, it's possible, provided that data is small relative to the available bandwidth, to get that data across faster and more reliably than it would if you sent it over TCP. Plus, if you have to do NAT to enable home internet connections to talk to each other, having to do this NAT once for UDP and once for TCP (not even sure if this is possible…) is kind of painful.


Conclusion


My recommendation is not only that you use UDP, but that you only use UDP for your game protocol. Don’t mix TCP and UDP! Instead, learn how to implement the specific features of TCP that you need inside your own custom UDP based protocol.


Of course, it is no problem to use HTTP to talk to some RESTful services while your game is running. I’m not saying you can’t do that. A few TCP connections running while your game is running isn’t going to bring everything down. The point is, don’t split your game protocol across UDP and TCP. Keep your game protocol running over UDP so you are fully in control of the data you send and receive and how reliability, ordering and congestion avoidance are implemented.


The rest of this article series show you how to do this, from creating your own virtual connection on top of UDP, to creating your own reliability, flow control and congestion avoidance.

Translation

Translation source





Translated by: 削微寒 · Reviewed by: 削微寒

Introduction

You have surely heard of sockets (see 初探socket); they come in two common types: TCP and UDP. When writing a network game, we first have to choose which type of socket to use. TCP, UDP, or both?

Which to choose depends entirely on the type of game you want to write. From here on, these articles assume you want to write an "action" online game, like the Halo series, Battlefield 1942, or Quake.

We will examine the strengths and weaknesses of the two socket types very carefully and dig into the lower layers to figure out how the internet actually works. Once we understand all this, the right choice is easy to make.

TCP/IP

TCP stands for "Transmission Control Protocol" and IP for "Internet Protocol". Everything you do on the internet, from browsing the web to sending and receiving email, is built on top of these two.

TCP

If you have used a TCP socket, you know it is a reliable, connection-oriented transport protocol. Simply put: two machines first establish a connection, then send each other data, just like writing a file on one computer and reading it on another. (The way I think of it: a TCP socket is like a "file" object shared between the connected machines, and the two sides transfer data by reading and writing this "file".)

The connection is reliable and ordered, meaning all the data you send is guaranteed to reach the other end exactly as it was sent (reliable and ordered; e.g. A sends "abc" to B over a TCP socket, and B always receives "abc", never "bca" or some other nonsense!). The data travels as a "stream" (a stream being the smallest ordered unit for operating on a data set, just like a stream over a local file, which is why TCP sockets resemble file objects): that is, TCP splits your data up, wraps it into packets, and sends them over the network.

Note: thinking of it like reading and writing a file makes this easier to understand.

IP

The IP protocol sits underneath TCP. (This touches on the layered internet protocol stack; I'll just include a diagram rather than go into detail.)

(Figure: the TCP/IP protocol stack)

IP has no concept of a connection; it simply carries packets from the transport layer above it from one computer to the next. You can picture the process as a hand-written note passed from person to person, changing hands many times until it finally reaches the person whose name is written on it ("for xxx's eyes only").

During delivery there is no guarantee that the note actually reaches the addressee. The sender sends it off and never knows whether it arrived, unless the recipient decides to write back: "Mate, I got your letter!" (The IP layer only carries data; it does no verification or anything else.)

Of course, the delivery process is actually quite complicated, because no machine knows the exact sequence of hops, that is, the optimal route that gets a packet to its destination fastest. So IP sometimes passes along multiple copies of the same data, which reach the destination via different routes, thereby discovering the optimal path.

This is the internet's self-optimizing, self-repairing design, which solves the connectivity problem. It really is a cool design; if you want to know more about the low-level workings, read a book on TCP/IP. (I recommend 上野宣's illustrated series.)

UDP

If we want to send and receive packets directly, we need the other kind of socket.

That is UDP, which stands for "User Datagram Protocol". It is another protocol built on top of IP, like TCP, but with far fewer features (no connection establishment, no verification, no splitting and reassembling of streams, and so on).

With UDP we can send a packet to a target IP and port (e.g. 80). The packet either reaches the target computer or is lost.

On the receiving side we just listen on a specific port (e.g. 80). When a packet arrives from any computer (note: UDP establishes no connections), we learn the sender's address (IP) and port, the packet's size, and its contents.

UDP is an unreliable protocol. In real use, most of the packets you send will be received, but you will usually lose 1-5%, and occasionally there are periods where nothing gets through at all (every packet lost; the more computers there are between you and the destination, the higher the chance of error).

UDP packets also have no ordering. For example: you send 5 packets in the order 1,2,3,4,5, but they may be received as 3,1,4,2,5. In practice the order is correct most of the time, but not always.

Finally, although UDP is not much above IP and is unreliable, the data you send either arrives whole or is lost whole. For example: if you send a 256-byte packet to another computer, that computer will never receive just 100 bytes of it; it receives the full 256 bytes or nothing at all. That is the only thing UDP guarantees; everything else is up to you. (My understanding: UDP is just a bare transport protocol that only guarantees the integrity of the datagram (note: the datagram, not your message). Everything else you must build yourself on top of it to meet your needs.)

TCP vs. UDP

How do we choose between a TCP socket and a UDP socket?

Let's look at the characteristics of each:

TCP:

· Connection-oriented

· Reliable and ordered

· Automatically splits your data into packets

· Makes sure data is not sent faster than the connection can handle (flow control)

· Easy to use, just like reading and writing a file

UDP:

· No concept of a connection; you have to implement one in code yourself (I haven't done this myself yet; it should be covered later)

· Unreliable and unordered: packets may arrive out of order, duplicated, or not at all

· You have to split your data into packets manually and send them yourself

· You have to do your own flow control

· If a packet is lost, you need to design your own detection and resend mechanisms

From the description above it is easy to conclude that TCP does everything we want and is very simple to use, while UDP is painful and we have to design and write everything ourselves. Obviously we should just use TCP!

No, that's too simple. (Turns out I was just too naive!)

Using TCP when developing an FPS (action online game) like those mentioned above would be a mistake; TCP just doesn't work well there. Why? You need to understand what TCP actually does to make everything look so simple. (Keep reading; this is the part I was most curious about!!! Excited yet?)

How TCP really works

TCP and UDP are both built on top of IP, yet they are completely different. UDP behaves very much like IP itself, whereas TCP hides all the complex, unreliable parts of packet delivery and abstracts them into something like a file object.

So how does TCP manage that?

First, TCP is a stream protocol: you just turn your input into a stream of data and TCP makes sure it reaches its destination. Since IP delivers information in packets and TCP sits on top of IP, TCP has to split the user's stream into packets. TCP queues up the data to be sent, and once enough data is queued, it sends a packet to the target machine.

This is a problem when a multiplayer online game sends very small packets. What happens then? If the data doesn't reach the buffer threshold, no packet is sent. The client needs a response from the server as soon as possible after the player's input, and if TCP waits for the buffer to fill before sending, as described above, latency appears and the player's experience becomes terrible. Online games can hardly tolerate lag; we want them "real-time" and smooth.

TCP has an option, TCP_NODELAY, that fixes this wait-until-full behaviour: the socket no longer waits for the buffer to fill, but sends data as soon as it is written.

However, even with TCP_NODELAY set, multiplayer games still run into a series of problems.

The root of them all is the way TCP handles lost and out-of-order packets to give you the "illusion" of ordered, reliable delivery.

How TCP implements reliability

In essence, TCP splits a data stream into packets, sends them over the unreliable IP protocol, has them reach the target machine, and reassembles them into a stream.

But what happens when a packet is lost? How are duplicate or out-of-order packets handled?

I won't go into the details of how TCP does this, because they are extremely complicated (see the books recommended above). Roughly: TCP sends a packet and waits a while until it detects that the packet was lost because it received no ACK (an acknowledgement control message confirming error-free receipt), then resends the lost packet to the target machine. Duplicate packets are discarded at the receiver, and out-of-order packets are resequenced, so delivery stays reliable and ordered.

Using TCP for real-time data transfer creates a problem: no matter what, whenever a packet goes wrong, TCP must wait for its retransmission. In other words, even if newer data has already arrived, you cannot access it; new data is put into a queue, and nothing is accessible until the lost packet has been resent and all the data is complete. How long does retransmission take? For example: with 125ms latency, the resend takes about 250ms in the best case, and in bad conditions, such as network congestion, you may wait 500ms or more. Then there is nothing you can do...

Why TCP should not be used when latency is critical

Using TCP for games like FPS (first-person shooters) causes problems, while web browsers, email and most applications are fine, because multiplayer games have real-time requirements. For things like a player's position, what matters is not what happened a second ago but the very latest state. TCP was never designed with this kind of requirement in mind.

Take a simple multiplayer game, say a shooter. The networking demands are simple: each frame the client sends the player's input to the server (movement entered with mouse and keyboard); the server processes every player's input, updates the simulation, and sends the current state back to the clients, which parse the response and render the latest scene for the players.

In that game, if a single packet is lost, everything stops to wait for the retransmission. The client stops receiving data, so the player's character stands frozen (lag! lag! lag!), unable to shoot or move. When the resent packet finally arrives, you receive stale, out-of-date information the player doesn't even care about (freeze for one second in a firefight, and by the time you can move again, you're already dead).

Unfortunately, there is no way to fix this behaviour; it is TCP's essential nature, how it makes unreliable, unordered packets look like an ordered, reliable stream.

We don't want a reliable ordered stream; we want the latency between client and server to be as low as possible, with no waiting for lost packets to be resent.

So this is why we don't use TCP when data must be real-time.

Why not use both UDP and TCP?

Real-time game data such as player input and state changes only cares about the latest values. But for other data, for example a sequence of commands sent from one machine to another (trade requests, chat?), reliable, ordered delivery is still very important!

So UDP for input and state, and TCP for reliable ordered data, looks like a good idea. The problem is that TCP and UDP are both built on top of IP, so both protocols communicate by sending packets, and the interactions between them are quite complicated, involving TCP's performance, reliability and flow control. In short, TCP induces UDP packet loss; see this paper.

Furthermore, mixing UDP and TCP is very complicated and painful to implement. (I won't translate this paragraph in full; in short: don't mix UDP and TCP, or you easily lose control over the data you transmit.)

Conclusion

My recommendation is not just that you use UDP, but that UDP should carry your game protocol. Don't mix TCP and UDP; instead, learn how certain parts of TCP are implemented and apply those techniques on top of UDP to build a protocol that fits your own needs (borrow from TCP's implementation, complete the functionality on UDP, and meet your own requirements).

This series will go on to cover: how to create a virtual connection on top of UDP (since UDP itself has no concept of a connection), and how to give UDP reliability, flow control, and non-blocking behaviour.

References

· MBA lib: 数据流 (data streams)

· Wikipedia: TCP/IP protocol suite

· W3School: TCP/IP protocol

· The differences between UDP and TCP



Tips for Faster Compilation

Posted on 11-01-2016 | In Linux

As a project grows larger, recompiling the whole thing every time wastes a lot of time. After some research, here is a summary of methods that help speed things up.

tmpfs

Someone reported that using a RAMDisk on Windows cut a project's build time from 4.5 hours to 5 minutes. That figure may be somewhat exaggerated, but on reflection, compiling with the files in memory ought to be much faster than on disk, especially if the compiler generates many temporary files.

This approach has the lowest setup cost: on Linux, just mount a tmpfs. It makes no demands on the project being built and requires no changes to the build environment.

mount -t tmpfs tmpfs ~/build -o size=1G

Testing build speed with the 2.6.32.2 Linux kernel:

  • Physical disk: 40m16s

  • tmpfs: 39m56s

Er... no real change. It seems the build bottleneck is largely not I/O. But for a real project, the build may include I/O-intensive steps such as packaging, so wherever possible, using tmpfs helps and never hurts.

Of course, for a large project you need enough memory to afford the tmpfs.

make -j

Since I/O is not the bottleneck, the CPU must be a major factor in build speed.

Running make -j with a parameter makes the project build in parallel. On a dual-core machine, for example, make -j4 lets make run up to 4 compile commands at once, using the CPU resources more effectively.

Testing with the kernel again:

  • make: 40m16s

  • make -j4: 23m16s

  • make -j8: 22m59s

So on a multi-core CPU, a moderate amount of parallelism clearly improves build speed. But don't run too many parallel jobs; about twice the number of CPU cores is a good rule of thumb.
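A quick check of the speedup implied by the timings above (my own arithmetic, not part of the original post):

```python
# wall-clock times from the kernel build test, in seconds
serial = 40 * 60 + 16    # make:     40m16s
parallel = 23 * 60 + 16  # make -j4: 23m16s

speedup = serial / parallel  # roughly 1.7x on a dual-core machine
```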

This approach is not without cost, though. If the project's Makefile is not written properly and dependencies are not set up correctly, the parallel build may simply fail. If the dependencies are set too conservatively, the build's inherent parallelism drops and you won't get the best results either.

ccache

ccache caches intermediate compilation results so that time can be saved when compiling again. This is ideal for playing with the kernel, where you often change a little code and rebuild while most things haven't changed between the two builds; the same holds for everyday project development. Why not just use make's built-in incremental builds? Again because, in reality, with a non-standard Makefile this "clever" scheme often doesn't work at all, and only make clean followed by make is dependable.

After installing ccache, create symbolic links named gcc, g++, c++ and cc under /usr/local/bin, all pointing to /usr/bin/ccache. The point is to make sure that ccache gets invoked whenever the system calls gcc and friends (normally /usr/local/bin precedes /usr/bin in PATH).

Testing again:

  • First build with ccache (make -j4): 23m38s

  • Second build with ccache (make -j4): 8m48s

  • Third build with ccache (after changing some configuration, make -j4): 23m48s

Changing the configuration (I changed the CPU type...) clearly hurts ccache badly: once the basic headers change, all the cached data becomes invalid and everything starts over. But if you only modify a few .c files, the effect of ccache is still striking. And since ccache places no special requirements on the project and costs almost nothing to deploy, it is very practical in daily work.

You can run ccache -s to inspect cache usage and hit rates:

cache directory /home/lifanxi/.ccache

cache hit 7165

cache miss 14283

called for link 71

not a C/C++ file 120

no input file 3045

files in cache 28566

cache size 81.7 Mbytes

max cache size 976.6 Mbytes

Clearly, only the second build hit the cache; the misses come from the first and third builds. The two builds' cache takes 81.7 MB of disk, which is perfectly acceptable.
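From those counters, the hit rate over all compile requests works out to about a third (my own arithmetic, not part of the original post):

```python
# counters from the ccache -s output above
hits = 7165
misses = 14283

hit_rate = hits / (hits + misses)  # fraction of compile requests served from cache
```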

distcc

A single machine's power is limited, so several computers can be combined to compile together. This is practical in day-to-day company development, since every developer has a development build environment, compiler versions are generally consistent, and the company network usually performs well. That is when distcc shows its strength.

Using distcc does not, as you might imagine, require every machine to have an exactly identical environment. It only requires that the source code can be built in parallel with make -j, and that the machines taking part in the distributed build have the same compiler. It works by distributing preprocessed source files to multiple computers; preprocessing, linking of the compiled object files, and all work other than compilation still happen on the controlling machine that initiated the build, so only that machine needs a complete build environment.

After installing distcc, start its daemon:

/usr/bin/distccd --daemon --allow 10.64.0.0/16

By default it listens on port 3632 and accepts distcc connections from the allowed network.

Then set the DISTCC_HOSTS environment variable to the list of machines that may take part in the build. Usually localhost participates too, but when many machines are available you can drop localhost from the list, so that the local machine only does preprocessing, distribution and linking, while all compilation happens on the other machines. With many machines, localhost carries a heavy coordination load, so it should no longer "moonlight" as a compiler.

export DISTCC_HOSTS="localhost 10.64.25.1 10.64.25.2 10.64.25.3"

Then, just as with ccache, link g++, gcc and the other common commands to /usr/bin/distcc.

When running make you must use the -j parameter; a good value is about twice the total number of CPU cores across all participating machines.

The same test:

  • One dual-core machine, make -j4: 23m16s

  • Two dual-core machines, make -j4: 16m40s

  • Two dual-core machines, make -j8: 15m49s

Compared with the 23 minutes on a single dual-core machine at the start, that is quite a bit faster, and adding more machines yields even better results.

During the build you can run distccmon-text to watch how compile jobs are being distributed. distcc can also be used together with ccache; a single environment variable makes that work, which is very convenient.

Summary

  • tmpfs: removes the I/O bottleneck and makes full use of local memory

  • make -j: makes full use of local compute resources

  • distcc: draws on the resources of multiple machines

  • ccache: cuts the time spent recompiling unchanged code

The strength of all these tools is their relatively low deployment cost; used together, they comfortably save a considerable amount of time.

The above covers only their most basic usage; see each tool's man page for more.

A Flocking Fish AI Plugin for Unreal Engine 4

Posted on 10-18-2016 | In GitHub

A fish flock AI plugin for Unreal Engine 4.

This plugin version can run 2,000+ fish at the same time.

The source code is available at fish.

Video Preview



Download

MyFish.exe (Win64)

. . .

A Python Implementation of XXTEA

Posted on 09-13-2016 | In Misc

In the field of data encryption and decryption, algorithms divide into symmetric-key and asymmetric-key types.

Because of their respective characteristics, the two are applied in different areas: symmetric-key algorithms are fast and are generally used to encrypt bulk data, while asymmetric-key algorithms offer excellent security and are widely used for digital signatures.

The Tiny Encryption Algorithm (TEA) and its variants (XTEA, Block TEA, XXTEA) are all block ciphers. They are easy to describe and trivial to implement (typically a few lines of code).

TEA stands for Tiny Encryption Algorithm and is known for fast encryption and decryption and a simple implementation. It was originally designed in 1994 by David Wheeler and Roger Needham of the Cambridge Computer Laboratory.

The algorithm encrypts 64-bit blocks of data with a 128-bit key over 64 rounds of iteration, although the authors considered 32 rounds sufficient.

It uses a magic constant δ as a multiplier, derived from the golden ratio to guarantee that every round of encryption differs. The precise value of δ does not seem to matter much; TEA defines it as δ = ⌊(√5 − 1) · 2^31⌋ (the 0x9E3779B9 in the code).
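The constant can be checked directly against that formula:

```python
import math

# delta = floor((sqrt(5) - 1) * 2**31), derived from the golden ratio
delta = math.floor((math.sqrt(5) - 1) * 2 ** 31)
print(hex(delta))  # 0x9e3779b9
```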

TEA was later found to be flawed, and in response its designers proposed an upgraded version, XTEA (sometimes called "tean").

XTEA uses the same simple operations as TEA but in a very different order. To thwart key-schedule attacks, the four subkeys (the original 128-bit key is split into four 32-bit subkeys during encryption) are mixed in a less regular fashion, at the cost of speed.

The same report that describes XTEA also introduces another variant called Block TEA, which can operate on variable-length blocks of any multiple of 32 bits. It applies the XTEA round function to each word in the block in turn, combining it with its neighbouring word. How many rounds it runs depends on the block size, but at least six are needed. The advantage of this approach is that it needs no mode of operation (CBC, OFB, CFB, etc.); the key can be applied to the message directly, and for long messages it can be more efficient than XTEA.

In 1998, Markku-Juhani Saarinen published code that effectively attacks Block TEA, but shortly afterwards David J. Wheeler and Roger M. Needham released a revised version of the algorithm, known as XXTEA.

XXTEA uses a structure similar to Block TEA's, but draws on the neighbouring words when processing each word in the block. It replaces the XTEA round function with a more complex MX function that takes two inputs.

XXTEA is secure and very fast, making it a good fit for web development.



import struct

_DELTA = 0x9E3779B9

def _long2str(v, w):
    n = (len(v) - 1) << 2
    if w:
        m = v[-1]
        if (m < n - 3) or (m > n):
            return b''
        n = m
    s = struct.pack('<%iL' % len(v), *v)
    return s[0:n] if w else s

def _str2long(s, w):
    n = len(s)
    m = (4 - (n & 3) & 3) + n
    s = s.ljust(m, b'\0')
    v = list(struct.unpack('<%iL' % (m >> 2), s))
    if w:
        v.append(n)
    return v

def encrypt(data, key):
    # data and key are bytes; the key is padded to 16 bytes
    if data == b'':
        return data
    v = _str2long(data, True)
    k = _str2long(key.ljust(16, b'\0'), False)
    n = len(v) - 1
    z = v[n]
    y = v[0]
    total = 0
    q = 6 + 52 // (n + 1)
    while q > 0:
        total = (total + _DELTA) & 0xffffffff
        e = total >> 2 & 3
        for p in range(n):
            y = v[p + 1]
            v[p] = (v[p] + ((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4) ^ (total ^ y) + (k[p & 3 ^ e] ^ z))) & 0xffffffff
            z = v[p]
        y = v[0]
        v[n] = (v[n] + ((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4) ^ (total ^ y) + (k[n & 3 ^ e] ^ z))) & 0xffffffff
        z = v[n]
        q -= 1
    return _long2str(v, False)

def decrypt(data, key):
    if data == b'':
        return data
    v = _str2long(data, False)
    k = _str2long(key.ljust(16, b'\0'), False)
    n = len(v) - 1
    z = v[n]
    y = v[0]
    q = 6 + 52 // (n + 1)
    total = (q * _DELTA) & 0xffffffff
    while total != 0:
        e = total >> 2 & 3
        for p in range(n, 0, -1):
            z = v[p - 1]
            v[p] = (v[p] - ((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4) ^ (total ^ y) + (k[p & 3 ^ e] ^ z))) & 0xffffffff
            y = v[p]
        z = v[n]
        v[0] = (v[0] - ((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4) ^ (total ^ y) + (k[0 & 3 ^ e] ^ z))) & 0xffffffff
        y = v[0]
        total = (total - _DELTA) & 0xffffffff
    return _long2str(v, True)

if __name__ == "__main__":
    # round-trip test
    print(decrypt(encrypt(b'Hello XXTEA!', b'16bytelongstring'), b'16bytelongstring'))

Internal and External Fragmentation

Posted on 09-12-2016 | In Misc

Question:

In memory management, what do "internal fragmentation" and "external fragmentation" refer to? Which kind of fragmentation occurs in fixed partition allocation, variable partition allocation, paged virtual memory, and segmented virtual memory, and why?

Answer:

In storage management,

internal fragmentation is the unused portion of the storage space allocated to a job, and

external fragmentation is the collection of small storage blocks in the system that cannot be used.

    1. In fixed partition allocation, to load a user job into memory the allocator searches the system partition table for a free partition that satisfies the job's requirements. Since a job's size rarely equals the partition's size, part of the partition's storage is wasted.

So fixed partition allocation exhibits internal fragmentation.

    2. In variable partition allocation, to load a job into memory a free partition satisfying the job's needs is found by some allocation algorithm; if that free partition is larger than the job requested, it is split in two, one part allocated to the job and the remainder kept as a free system partition.

So variable partition allocation exhibits external fragmentation.

    3. In a paged virtual memory system, the user job's address space is divided into pages of equal size and physical memory into frames of the same size. In general, a job's size is not an exact multiple of the frame size, so part of the job's last page is still wasted.

So paged virtual memory exhibits internal fragmentation.

    4. In a segmented virtual memory system, the job's address space consists of logical segments, each allocated a contiguous region of memory, with no contiguity required between segments; allocation works much like dynamic partitioning.

So segmented virtual memory exhibits external fragmentation.

Detailed explanation

When the operating system allocates memory, it sometimes produces regions that are free yet cannot be used normally. These are memory fragments, and they come in two kinds: internal and external.

  • Internal fragmentation: when a process requests memory, the system satisfies the request and hands over some extra memory as well; this extra memory belongs to the process, and other processes cannot access it.

  • External fragmentation: free memory regions that belong to no process but are too small to satisfy any other process's allocation request.

The paging case

Paged memory management allocates memory to processes in pages (the page size is fixed and set by the system).

For example: suppose memory totals 100K, divided into 10 pages of 10K each.
Process A requests 56K. Since paging allocates by whole pages, the system gives A six pages, that is 60K of memory; A uses only 6K of the last page, leaving 4K over. Those 4K have already been allocated to A, so no other process can access them.

This kind of fragment is internal fragmentation.
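The arithmetic in that example can be written out directly (numbers taken from the example above):

```python
import math

page_size = 10  # KB per page
request = 56    # KB requested by process A

pages = math.ceil(request / page_size)   # whole pages allocated
allocated = pages * page_size            # KB handed to the process
internal_fragment = allocated - request  # KB wasted inside the last page
```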

The segmentation case

Segmented memory management allocates per segment (segment sizes are determined by program logic and are not fixed): a process gets exactly as much memory as it requests, so no internal fragmentation arises, but segmentation does produce external fragmentation.

For example: suppose memory totals 100K. Process A requests 60K, and after the system satisfies it, 40K of free memory remains. If process B then requests 50K, only 40K is left; although those 40K belong to no process, they cannot satisfy B's request and so cannot be allocated to it. This produces external fragmentation.

Demand segmentation adds segment loading and segment replacement on top of plain segmentation,
so both segmentation and demand segmentation produce external fragmentation.

Exercise

Which of the following memory management schemes produce external fragmentation? (Correct answers: B and D)

A. Paging
B. Segmentation
C. Demand paging
D. Demand segmentation

© 2013 - 2025 Mike