“See that? The current range is 1024 to 65000, so you have only 63977 port numbers to work with.” I hurried to apologize to Lao Cao (the operating system): “Oh, my apologies, I really do still have a lot to learn. Can that range be changed?” Lao Cao didn’t hold my ignorance against me and patiently replied, “It can. Open /etc/sysctl.conf with vim and add a line to it:”
net.ipv4.ip_local_port_range = 60000 60009
“Save the file, then run sysctl -p /etc/sysctl.conf to make it take effect. After that you’ll have only 10 port numbers to use, so you’ll hit the ‘no ports available’ error much sooner.” “I see. Thanks, Lao Cao, for teaching me yet another lesson.”

Wait, that’s not quite right, though. Establishing a TCP connection means binding the sockets at the two ends of the conversation:

source IP : source port <----> destination IP : destination port

As long as the four-tuple formed by this binding is unique, the connection can be made. I ran out of ports just now only because I kept connecting to the same destination IP and destination port. What if I switch to a different destination port? I handed the new table to Lao Cao. He saw through my little scheme at a glance, but there was nothing he could do about it: he promptly set up a new TCP connection and handed me back a slip with a fresh file descriptor on it.

Success! As long as I keep changing the destination IP and destination port whenever the source ports run out, so that the four-tuple never repeats, I can create as many TCP connections as I want! Which also shows how absurd the claim is that you can create at most 65535 TCP connections.

File descriptors

Having found a way around the port-number limit, I kept going to Lao Cao for new TCP connections, and there was nothing he could do to stop me. Until one day I received another special slip, and what was written on it was not a file descriptor. I asked Lao Cao irritably, “What’s this about now?” Lao Cao answered with glee, “Heh, you thought getting around the port limit put you above the law? You’ve run out of file descriptors!”

“Why is there a limit on everything? You operating systems really do box us in too much!”

“Of course there’s a limit. Look how many TCP connections you’ve opened! Every TCP connection I set up for you costs one file descriptor, and Linux restricts the number of open file descriptors in three separate ways.”
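You can watch both limits bite with a small experiment. Here is a minimal sketch in Python, assuming you have applied the 10-port ip_local_port_range above and started two local listeners yourself (for example, python3 -m http.server 8001 and python3 -m http.server 8002; the port numbers are arbitrary):

import socket

conns = []
for dst_port in (8001, 8002):       # vary the destination half of the four-tuple
    try:
        while True:
            # each connect() consumes one local port (per destination) and one fd
            conns.append(socket.create_connection(("127.0.0.1", dst_port)))
    except OSError as e:
        # typically EADDRNOTAVAIL: local ports exhausted for this destination.
        # EMFILE here would mean the file descriptor limit was hit first.
        print(f"destination port {dst_port}: {len(conns)} total connections, stopped by: {e}")

With only 10 local ports, the first destination exhausts almost immediately, and switching to the second destination port buys roughly 10 more connections, exactly as in the story. As for the three file descriptor limits: the story has not enumerated them yet, but the standard trio on Linux is system-wide, per-user, and per-process, which you can inspect like this:

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)   # per-process limit, what ulimit -n shows
print("per-process fd limit (soft/hard):", soft, hard)
with open("/proc/sys/fs/file-max") as f:                  # system-wide ceiling (fs.file-max)
    print("system-wide fd limit:", f.read().strip())
# the per-user limit is set via the nofile entries in /etc/security/limits.conf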
You might be familiar with Linux load averages already. Load averages are the three numbers shown with the uptime and top commands – they look like this:
load average: 0.09, 0.05, 0.01
Most people have an inkling of what the load averages mean: the three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine. But, what’s the threshold? What constitutes “good” and “bad” load average values? When should you be concerned over a load average value, and when should you scramble to fix it ASAP?
First, a little background on what the load average values mean. We’ll start out with the simplest case: a machine with one single-core processor.
The traffic analogy
A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator … sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they’re in for delays.
So, Bridge Operator, what numbering system are you going to use? How about:
0.00 means there’s no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there’s no backup, and an arriving car will just go right on.
1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
over 1.00 means there’s backup. How much? Well, 2.00 means that there are two lanes worth of cars total — one lane’s worth on the bridge, and one lane’s worth waiting. 3.00 means there are three lanes worth total — one lane’s worth on the bridge, and two lanes’ worth waiting. Etc.
This is basically what CPU load is. “Cars” are processes using a slice of CPU time (“crossing the bridge”) or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.
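On Linux, the same numbers (plus a snapshot of the run queue) live in /proc/loadavg. A minimal sketch in Python; the fourth field is runnable tasks over total tasks:

with open("/proc/loadavg") as f:
    fields = f.read().split()

load1, load5, load15 = (float(x) for x in fields[:3])
runnable, total = fields[3].split("/")    # e.g. "2/341": 2 runnable out of 341 tasks
print(load1, load5, load15, runnable, total)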
Like the bridge operator, you’d like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also, like the bridge operator, you are still ok if you get some temporary spikes above 1.00 … but when you’re consistently above 1.00, you need to worry.
So you’re saying the ideal load is 1.00?
Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:
The “Need to Look into it” Rule of Thumb: 0.70. If your load average is staying above 0.70, it’s time to investigate before things get worse.
The “Fix this now” Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you’re going to get woken up in the middle of the night, and it’s not going to be fun.
The “Arrgh, it’s 3 AM WTF?” Rule of Thumb: 5.00. If your load average is above 5.00, you could be in serious trouble: your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like in the middle of the night or when you’re presenting at a conference. Don’t let it get there.
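If you want a script to apply these rules of thumb for you, here is a minimal sketch in Python using the five-minute average (single-core thresholds for now; we’ll adjust for multiple cores below):

import os

load5 = os.getloadavg()[1]     # os.getloadavg() returns the (1, 5, 15)-minute tuple
if load5 > 5.00:
    print("Arrgh: serious trouble")
elif load5 > 1.00:
    print("fix this now")
elif load5 > 0.70:
    print("need to look into it")
else:
    print("all clear")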
What about Multi-processors? My load says 3.00, but things are running fine!
Got a quad-processor system? It’s still healthy with a load of 3.00.
On a multi-processor system, the load is relative to the number of processor cores available. The “100% utilization” mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.
If we go back to the bridge analogy, the “1.00” really means “one lane’s worth of traffic”. On a one-lane bridge, that means it’s filled up. On a two-lane bridge, a load of 1.00 means it’s at 50% capacity — only one lane is full, so there’s another whole lane that can be filled.
Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.
Multicore vs. multiprocessor
While we’re on the topic, let’s talk about multicore vs. multiprocessor. For performance purposes, is a machine with a single dual-core processor basically equivalent to a machine with two processors with one core each? Yes. Roughly. There are lots of subtleties here concerning the amount of cache, the frequency of process hand-offs between processors, etc. Despite those finer points, for the purposes of sizing up the CPU load value, the total number of cores is what matters, regardless of how many physical processors those cores are spread across.
Which leads us to two new Rules of Thumb:
The “number of cores = max load” Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.
The “cores is cores” Rule of Thumb: How the cores are spread out over CPUs doesn’t matter. Two quad-cores == four dual-cores == eight single-cores. It’s all eight cores for these purposes.
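In code, the per-core adjustment is a single division. A minimal sketch in Python; note that os.cpu_count() reports logical CPUs, which on a hyperthreaded box will be higher than the physical core count:

import os

cores = os.cpu_count()              # logical cores visible to the OS
load15 = os.getloadavg()[2]         # the 15-minute average
print(f"15-min load per core: {load15 / cores:.2f}")   # above 1.00 means saturated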
Bringing It Home
Let’s take a look at the load averages output from uptime:
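load average: 0.65, 0.42, 0.36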
This is on a dual-core CPU, so we’ve got lots of headroom. I won’t even think about it until load gets and stays above 1.7 or so.
Now, what about those three numbers? 0.65 is the average over the last minute, 0.42 is the average over the last five minutes, and 0.36 is the average over the last 15 minutes. Which brings us to the question:
Which average should I be observing? One, five, or 15 minutes?
For the numbers we’ve talked about (1.00 = fix it now, etc.), you should be looking at the five- or 15-minute averages. Frankly, if your box spikes above 1.0 on the one-minute average, you’re still fine. It’s when the 15-minute average goes north of 1.0 and stays there that you need to snap to. (Obviously, as we’ve learned, adjust these numbers to the number of processor cores your system has.)
So # of cores is important to interpreting load averages … how do I know how many cores my system has?
cat /proc/cpuinfo to get info on each processor in your system. Note: not available on OSX; Google for alternatives. To get just a count, run it through grep and word count: grep 'model name' /proc/cpuinfo | wc -l
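If you’d rather not shell out, or you’re on OSX where /proc doesn’t exist, Python’s standard library reports the same count:

import os
print(os.cpu_count())   # logical core count; works on Linux and OSX alike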