Sharing a Handy Script to Quickly Create an SSH Tunnel and Access Web Pages Through a Jump Host
Posted by mcsrainbow in Linux&Unix, 2017/03/10
Background:
In our production environment, people need remote access from home to many restricted server pages. These servers are created by auto scaling, so their IP addresses change frequently, which makes it impractical to grant access by adding and removing static routes in the VPN.
So I have been creating SSH tunnels through a fixed whitelisted host to reach those servers. Here I share my script with everyone.
Script URL:
https://github.com/mcsrainbow/shell-scripts/blob/master/scripts/create_tunnel.sh
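The core of the script is plain SSH port forwarding; below is a minimal sketch of the underlying commands (the hostname, user, and ports are placeholders, not values from the script):

# Open a SOCKS5 proxy on local port 1080 through the whitelisted jump host,
# then point the browser's proxy settings at localhost:1080
ssh -f -N -D 1080 user@whitelist-host.example.com

# Or forward a single restricted page to a local port
ssh -f -N -L 8080:restricted-server:80 user@whitelist-host.example.com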
Experiencing the Power of awk Through a Quirky Interview Question
Posted by mcsrainbow in Linux&Unix, Programming, 2016/09/20
The interview question:
Requirement: with "|" as the delimiter, print the lines where the first character of the first field is 1, the second character of the second field is 2, the third character of the third field is 3, and so on; and the first 8 characters of the second-to-last field are today's date, e.g. "20140610".
Source file raw_data.txt:
1|12|a7f865ce-b274-4b23-890c-893c7d1f2198||||2016082055104
3|22|bd166d4f-5222-4d69-a277-deb543db1a9d||||20141117012936|
1|a2af|1135ea2-067a-4d4c-b56f|01442332|308g5dfg|955226r9|2016092037|20150428102737
1|222|1f3f1950-6b0e-4459-a1ee|sad4sadf|adsa5dfd|7746765|2016092002|
$ date +%Y%m%d
20160920
My Python implementation:
$ ./check_string.py
Matched: 1|a2af|1135ea2-067a-4d4c-b56f|01442332|308g5dfg|955226r9|2016092037|20150428102737
#!/usr/bin/env python
import datetime

def check_item(string):
    item_list = string.split('|')
    special_item_id = len(item_list) - 2  # index of the second-to-last field
    for item_id in range(0, len(item_list)):
        item_sub_list = list(item_list[item_id])
        expect_item_sub_value = item_id + 1
        if item_id != special_item_id:
            # the (i+1)-th character of field i+1 must be the digit i+1
            try:
                if item_sub_list[item_id] != str(expect_item_sub_value):
                    return False
            except IndexError:
                return False
        else:
            # the first 8 characters of the second-to-last field must be today's date
            if datetime.datetime.now().strftime("%Y%m%d") != ''.join(item_sub_list[0:8]):
                return False
    return True

if __name__ == '__main__':
    filename = 'raw_data.txt'
    with open(filename) as fp:
        for line in fp:
            if check_item(line.replace('\n', '')):
                print "Matched: {0}".format(line.replace('\n', ''))
An awk implementation by reader “运维@苏东”:
$ cat raw_data.txt | awk -F\| 'BEGIN{d=strftime("%Y%m%d")} { i=1;j=NF-1;k=0;while(i<j-1){if ( substr($i,i,1) == i ){k++;}; i++}; if (k==i-1 && substr($j,1,8) == d && substr($NF,NF,1) == NF) {print $0} }'
1|a2af|1135ea2-067a-4d4c-b56f|01442332|308g5dfg|955226r9|2016092037|20150428102737
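For readability, here is the same logic expanded, with comments added by me (functionally identical; strftime is a gawk built-in):

$ awk -F\| '
BEGIN { d = strftime("%Y%m%d") }          # today, e.g. 20160920
{
  i = 1; j = NF - 1; k = 0
  while (i < j - 1) {                     # walk fields 1 .. NF-3
    if (substr($i, i, 1) == i) { k++ }    # i-th char of field i must be i
    i++
  }
  if (k == i - 1 &&                       # all positional checks passed
      substr($j, 1, 8) == d &&            # second-to-last field starts with today
      substr($NF, NF, 1) == NF) {         # NF-th char of the last field is NF
    print $0
  }
}' raw_data.txt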
Sharing an Oozie Job Debug Script
Posted by mcsrainbow in Linux&Unix, 2016/05/24
References:
https://oozie.apache.org/docs/4.0.0/WebServicesAPI.html
Background:
In our production Hadoop cluster, we use Oozie to manage workflows, and quite a few workflows fail during execution for all sorts of reasons.
We usually troubleshoot through the Oozie Web Console, but the whole process is inconvenient. After studying the Oozie API, I wrote a script that automates most of the troubleshooting steps for us.
Details:
The troubleshooting process the script emulates is as follows:
1. Fetch the information of every action in the workflow; the common statuses are OK, RUNNING, FAILED, KILLED, and ERROR
2. For actions in FAILED, KILLED, or ERROR status, first fetch the consoleUrl, then dig further for the more valuable logsLinks, print the related debugging information, and export the action's XML configuration file (see the API sketch after the script URL below)
Script URL: https://github.com/mcsrainbow/python-demos/blob/master/demos/debug_oozie_job.py
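The heavy lifting is done by the Oozie Web Services API referenced above; here is a minimal sketch of the first step (the server URL and job ID are placeholders, and only the v1 "info" call is shown):

#!/usr/bin/env python
# Fetch workflow job info from the Oozie Web Services API and
# print the actions that are not in OK/RUNNING status.
import json
import urllib2

oozie_url = 'http://idc1-hive1:11000/oozie'  # placeholder server, default Oozie port
job_id = '0061222-160121234010195-oozie-oozi-W'  # placeholder workflow job id

url = '{0}/v1/job/{1}?show=info'.format(oozie_url, job_id)
job_info = json.load(urllib2.urlopen(url))

for action in job_info['actions']:
    if action['status'] not in ('OK', 'RUNNING'):
        print "status: '{0}', name: '{1}'".format(action['status'], action['name'])
        print "consoleUrl: '{0}'".format(action['consoleUrl'])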
Example run:
[dong@idc1-server1 ~]$ debug_oozie_job.py --server idc1-hive1 --job_id 0011096-160121234010195-oozie-oozi-C@2387
##################################
externalId: 0061222-160121234010195-oozie-oozi-W
status: 'OK', name: 'fork-1'
status: 'OK', name: ':start:'
status: 'OK', name: 'check-point'
status: 'OK', name: 'daily-decision'
status: 'OK', name: 'extract-labelpair-profiles'
status: 'OK', name: 'extract-web-profiles'
status: 'ERROR', name: 'extract-nobid-profiles'
consoleUrl: 'http://idc1-rm1.heylinux.com:8100/proxy/application_1458783473169_227279'
logsLinks: http://idc1-rm1.heylinux.com:19888/jobhistory/logs/idc1-node1.heylinux.com:43483/container_1458783473169_227279_01_000002/attempt_1458783473169_227279_m_000000_0/oozie
*DEBUG*:
status: 'ERROR'
retries: '0'
transition: 'email-error'
stats: 'None'
startTime: 'Tue, 24 May 2016 02:50:19 GMT'
toString: 'Action name[extract-nobid-profiles] status[ERROR]'
cred: 'null'
errorMessage: 'None'
errorCode: 'None'
consoleUrl: 'http://idc1-rm1.heylinux.com:8100/proxy/application_1458783473169_227279'
externalId: 'job_1458783473169_227279'
externalStatus: 'FAILED/KILLED'
conf: '/tmp/0011096-160121234010195-oozie-oozi-C@2387_extract-nobid-profiles.xml'
type: 'map-reduce'
trackerUri: 'idc1-rm1:8032'
externalChildIDs: ''
endTime: 'Tue, 24 May 2016 03:24:42 GMT'
data: 'None'
id: '0061222-160121234010195-oozie-oozi-W@extract-nobid-profiles'
name: 'extract-nobid-profiles'
status: 'OK', name: 'extract-data-profiles'
status: 'OK', name: 'extract-optout-profiles'
status: 'OK', name: 'fail'
status: 'OK', name: 'email-error'
##################################
Please check the URLs in "logsLinks" above for detailed information.
Do NOT ignore the messages in "Log Type: stdout".
Sharing a RAID Disk Health Monitoring Script
Posted by mcsrainbow in Linux&Unix, 2016/01/31
References:
http://blog.irq1.com/megacli-commands-to-storcli-command-conversion/
https://github.com/mcsrainbow/shell-scripts/blob/master/scripts/MegaRAID_SUM
Background:
In our production environment we have a large number of physical servers, mainly used for Hadoop clusters that demand high-end hardware.
These servers typically have a RAID controller and 16 mounted disks of at least 3TB each. Given the IO-intensive nature of Hadoop, disks frequently buckle under the load and fail, so checking the health of the RAID disks is essential.
Details:
The script works as follows:
1. Use MegaCli64 to collect the abnormal-status information; the common statuses are Degraded, Offline, Critical, and Failed
2. Aggregate the collected statuses and extract the slot information of the problematic disks (see the sketch after the script URL below)
Script URL: https://github.com/mcsrainbow/shell-scripts/blob/master/scripts/check_megaraid_status
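The gist of the checks is a couple of MegaCli64 queries; a minimal sketch (the MegaCli64 path is an assumption, adjust it to your install):

#!/bin/bash
# Pull the abnormal-status counters from the adapter summary,
# then locate any disk whose firmware state looks bad.
MegaCli64=/opt/MegaRAID/MegaCli/MegaCli64  # assumed install path

# Counters such as Degraded, Offline, Critical Disks, Failed Disks
${MegaCli64} -AdpAllInfo -aALL | grep -E 'Degraded|Offline|Critical Disks|Failed Disks'

# Enclosure ID, slot number, and firmware state of each physical disk
${MegaCli64} -PDList -aALL | grep -E 'Enclosure Device ID|Slot Number|Firmware state'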
Example run:
[root@idc1-server1 ~]# /usr/local/nagios/libexec/check_megaraid_status
CRIT - Virtual Drives: {Degraded: 0, Offline: 2}, Physical Disks: {Critical: 0, Failed: 2}, Bad Drive: [{adapter: 0, enclID: 2, slot: 7, Span ref: 8, Row: 0}, {adapter: 0, enclID: 2, slot: 1, Span ref: 2, Row: 0}]
Ops Tools Roundup: Performance Tuning, Performance Monitoring, Performance Testing
Posted by mcsrainbow in Linux&Unix, 2016/01/21
Background:
Regarding ops tools, a predecessor has already summarized the performance tuning, monitoring, and testing tools at every layer of the system in three diagrams.
I think they are well worth studying again, so I plan to include the three diagrams in this post and keep refining it over time, adding a brief introduction and usage examples for each command.
Tool details:
To be continued…
Sharing HAProxy RPM SPECs and an HTTPS Load Balancing Configuration
Posted by mcsrainbow in Linux&Unix, 2015/12/28
Without further ado, here is the content:
haproxy-1.5.17.spec
Name: haproxy
Version: 1.5.17
Release: el6
Summary: The Reliable, High Performance TCP/HTTP Load Balancer
Group: System Environment/Daemons
License: GPL
URL: http://www.haproxy.org
Source: haproxy-1.5.17.tar.gz
Vendor: Willy Tarreau
BuildRequires: gcc gcc-c++ autoconf automake cmake openssl openssl-devel pcre pcre-devel pcre-static
Requires: pcre pcre-devel pcre-static openssl openssl-devel

%description
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.

%prep
tar xzvf $RPM_SOURCE_DIR/haproxy-1.5.17.tar.gz

%build
cd haproxy-1.5.17/
make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_CPU_AFFINITY=1

%install
rm -rf $RPM_BUILD_ROOT
cd haproxy-1.5.17/
make install DESTDIR=$RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/etc/init.d
cp examples/haproxy.init $RPM_BUILD_ROOT/etc/init.d/haproxy
chmod 755 $RPM_BUILD_ROOT/etc/init.d/haproxy
mkdir -p $RPM_BUILD_ROOT/etc/haproxy
cp examples/examples.cfg $RPM_BUILD_ROOT/etc/haproxy/haproxy.cfg
mkdir -p $RPM_BUILD_ROOT/var/lib/haproxy
touch $RPM_BUILD_ROOT/var/lib/haproxy/stats

%clean
rm -rf $RPM_BUILD_DIR/haproxy-1.5.17

%preun
rm -f /usr/sbin/haproxy

%postun
userdel haproxy

%files
/etc/haproxy
/etc/init.d/haproxy
/usr/local/doc/haproxy
/usr/local/sbin/haproxy
/usr/local/share/man/man1/haproxy.1
/var/lib/haproxy

%post
useradd haproxy -M -d /var/lib/haproxy
ln -sf /usr/local/sbin/haproxy /usr/sbin/haproxy

%changelog
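To build the RPM from this spec, a typical invocation looks like the following (assuming the default ~/rpmbuild tree):

# Put the source tarball where rpmbuild expects it, then build the binary and source packages
cp haproxy-1.5.17.tar.gz ~/rpmbuild/SOURCES/
rpmbuild -ba haproxy-1.5.17.spec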
haproxy.cfg
global
    # /etc/sysconfig/syslog
    # local2.* /var/log/haproxy.log
    log 127.0.0.1 local2 notice
    maxconn 100000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats level admin
    stats bind-process 1
    nbproc 6
    debug
    # default ciphers to use on SSL-enabled listening sockets
    ssl-default-bind-ciphers ALL:!SSLv2:!SSLv3:!LOW:!EXP:!MD5:!aNULL:!eNULL
    # fix the Logjam issue
    tune.ssl.default-dh-param 2048

defaults
    mode http
    log global
    option httplog
    option forwardfor except 127.0.0.0/8
    option dontlognull
    option abortonclose
    option redispatch
    retries 3
    timeout http-request 30s
    timeout queue 30s
    timeout connect 30s
    timeout client 30s
    timeout server 30s
    timeout http-keep-alive 30s
    timeout check 5s
    maxconn 100000

listen stats 0.0.0.0:9000
    stats uri /haproxy_stats
    stats hide-version

frontend http-in
    bind 0.0.0.0:80
    default_backend webapp-http

frontend https-in
    bind 0.0.0.0:443 ssl crt /etc/haproxy/star.heylinux.com.pem
    reqadd X-Forwarded-Proto:\ https
    reqadd X-SSL-Secure:\ true
    option forwardfor
    default_backend webapp-http

backend webapp-http
    mode http
    option httplog
    option forwardfor except 127.0.0.0/8
    balance leastconn
    cookie JSESSIONID prefix
    option httpchk HEAD /keepalive.html HTTP/1.0 # health check file
    server webapp1 10.192.1.11:80 cookie webapp1 check maxconn 5000 weight 2
    server webapp2 10.192.1.12:80 cookie webapp2 check maxconn 5000 weight 2
    server webapp3 10.192.1.13:80 cookie webapp3 check maxconn 5000 weight 2
    server webapp4 10.192.1.14:80 cookie webapp4 check maxconn 5000 weight 2
    server webapp5 10.192.1.15:80 cookie webapp5 check maxconn 5000 weight 2
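Before (re)starting the service, the configuration can be sanity-checked with haproxy's check mode:

# Parse the configuration and report errors without actually starting the proxy
haproxy -f /etc/haproxy/haproxy.cfg -c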
Adding a Recycle Bin to the rm Command
Posted by mcsrainbow in Linux&Unix, 2015/11/15
Background:
In chat groups, the conversation always comes around to the worst mistakes people have ever made, and the rm command is inevitably among them, most famously rm -rf /*.
Inspired by the HDFS trash mechanism, I improvised a shell script to implement a similar feature.
Setup:
[dong@localhost ~]$ sudo touch /usr/bin/delete
[dong@localhost ~]$ sudo chmod +x /usr/bin/delete
[dong@localhost ~]$ sudo vim /usr/bin/delete
#!/bin/bash

trash_dir=${HOME}/.Trash/$(date +%Y%m%d%H%M%S)

function move_item(){
  item=$1
  full_path=$2
  full_dir=$(dirname ${full_path})
  mkdir -p ${trash_dir}${full_dir}
  mv ${item} ${trash_dir}${full_path}
  if [[ $? -eq 0 ]]; then
    echo "Moved ${item} to ${trash_dir}${full_path}"
  fi
}

if [[ $# -eq 0 ]] || $(echo "$1" |grep -Ewq '\-h|\-\-help'); then
  echo "${0} [-f] [*|FILE]"
  exit 2
fi

for item in $@; do
  if $(echo ${item} |grep -vq '^-'); then
    if $(echo ${item} |grep -q '^/'); then
      full_path=${item}
    else
      full_path=$(pwd)/${item}
    fi
    if $(echo $@ |grep -Ewq '\-f|\-rf|\-fr'); then
      move_item ${item} ${full_path}
    else
      echo -n "Move ${item} to ${trash_dir}${full_path}? [y/n] "
      read yorn
      if $(echo ${yorn} |grep -Ewq 'y|Y|yes|YES'); then
        move_item ${item} ${full_path}
      fi
    fi
  fi
done
[dong@localhost ~]$ mkdir tmp
[dong@localhost ~]$ cd tmp
[dong@localhost tmp]$ mkdir 1 2 3
[dong@localhost tmp]$ echo 1 > 1/1.txt
[dong@localhost tmp]$ echo 2 > 2/2.txt
[dong@localhost tmp]$ echo 3 > 3/3.txt
[dong@localhost tmp]$ touch a b c
[dong@localhost tmp]$ ln -s a d
[dong@localhost tmp]$ delete 1
Move 1 to /home/dong/.Trash/20160415114210/home/dong/tmp/1? [y/n] y
Moved 1 to /home/dong/.Trash/20160415114210/home/dong/tmp/1
[dong@localhost tmp]$ delete -f *
Moved 2 to /home/dong/.Trash/20160415114217/home/dong/tmp/2
Moved 3 to /home/dong/.Trash/20160415114217/home/dong/tmp/3
Moved a to /home/dong/.Trash/20160415114217/home/dong/tmp/a
Moved b to /home/dong/.Trash/20160415114217/home/dong/tmp/b
Moved c to /home/dong/.Trash/20160415114217/home/dong/tmp/c
Moved d to /home/dong/.Trash/20160415114217/home/dong/tmp/d
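To make the trash behavior the default, one option (my own suggestion, not part of the original setup) is to alias rm to the script in ~/.bashrc; note that the script only understands the -f/-rf/-fr flags, so this is best treated as a convenience for interactive shells:

# ~/.bashrc -- route rm through the recycle-bin script
alias rm='/usr/bin/delete'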
Installing and Deploying Graphite on CentOS 6
Posted by mcsrainbow in Linux&Unix, 2015/10/29
References:
http://centoshowtos.org/monitoring/graphite/
Background:
Usually we put the more important metrics into the monitoring system and graph them there.
But sometimes we need to analyze and graph some specific data on an ad-hoc basis, typically a pile of historical data for after-the-fact analysis.
For example, our log transfer system had recently been slow on some nodes, and we wanted to find out during which time periods the logs were slow. So we pulled from the history the transfer details of all logs on those nodes over the last 4 days, including log size and transfer time; then, through Graphite, we imported the data and graphed it with ease.
Setup:
Environment:
OS: CentOS6.5 x86_64 Minimal
1. Install the EPEL repository
# yum install -y epel-release
# sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/epel.repo
# sed -i s/mirrorlist=/#mirrorlist=/g /etc/yum.repos.d/epel.repo
# yum clean all
2. Install the required system packages
yum install -y python python-devel python-pip
yum groupinstall -y 'Development Tools'
3. Install and configure Graphite and related software (the MySQL part can be set up separately, on a dedicated server)
# yum install -y graphite-web graphite-web-selinux mysql mysql-server MySQL-python
# mysql_secure_installation
Set root password? [Y/n] Y
New password: graphite
Re-enter new password: graphite
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
# mysql -uroot -pgraphite
mysql> CREATE DATABASE graphite;
mysql> GRANT ALL PRIVILEGES ON graphite.* TO 'graphite'@'localhost' IDENTIFIED BY 'graphite';
mysql> FLUSH PRIVILEGES;
mysql> exit;
# vi /etc/graphite-web/local_settings.py
DATABASES = {
    'default': {
        'NAME': 'graphite',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'graphite',
        'PASSWORD': 'graphite',
    }
}
# /usr/lib/python2.6/site-packages/graphite/manage.py syncdb
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): root
E-mail address: guosuiyu@gmail.com
Password: graphite
Password (again): graphite
4. Start the Apache service as Graphite's web server
# /etc/init.d/httpd start
5. Install the underlying graphing and data collection software
# yum install -y python-carbon python-whisper
6. Start the data collection daemon
# /etc/init.d/carbon-cache start
7. Update the configuration to retain data under the metrics directory for 90 days (by default only 1 day is kept, meaning you cannot graph data older than a day; see the note on the retention format after the config below)
# vi /etc/carbon/storage-schemas.conf
[carbon]
priority = 101
pattern = ^carbon\.
retentions = 60:90d

[default_1min_for_90days]
priority = 100
pattern = .*
retentions = 60:90d
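A retention value reads as <seconds_per_point>:<how_long>, so 60:90d keeps one data point per minute for 90 days. Multiple resolutions can also be chained, e.g. (my illustration, not part of the original config):

retentions = 60s:90d,10m:1y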
Send some test data
# python /usr/share/doc/graphite-web-0.9.12/example-client.py
sending message
--------------------------------------------------------------------------------
system.loadavg_1min 0.01 1446086849
system.loadavg_5min 0.03 1446086849
system.loadavg_15min 0.05 1446086849
8. Check the processes currently listening on the server
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      0.0.0.0:80       0.0.0.0:*        LISTEN  2131/httpd
tcp   0      0      0.0.0.0:2003     0.0.0.0:*        LISTEN  2210/python
tcp   0      0      0.0.0.0:2004     0.0.0.0:*        LISTEN  2210/python
tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  1566/sshd
tcp   0      0      127.0.0.1:25     0.0.0.0:*        LISTEN  976/master
tcp   0      0      0.0.0.0:7002     0.0.0.0:*        LISTEN  2210/python
tcp   0      0      0.0.0.0:3306     0.0.0.0:*        LISTEN  2063/mysqld
9. Generate 24 hours of simulated data to try out Graphite's graphing
Install the nc command
# yum install -y nc
Create the script that generates the simulated data
# vi feed_random_data.sh
#!/bin/bash
#
# Generate random pageview data and feed Graphite

tree_path=metrics.random.pageview
time_period_hours=24

now_timestamp=$(date +%s)
minutes_number=$((${time_period_hours}*60))
echo ${minutes_number}
timestamp=$((${now_timestamp}-${minutes_number}*60))

for i in $(seq 1 ${minutes_number}); do
  echo "echo ${tree_path} $(($RANDOM%5000)) ${timestamp} | nc localhost 2003"
  echo ${tree_path} $(($RANDOM%5000)) ${timestamp} | nc localhost 2003
  let timestamp+=60
done
Run the script to feed the data to Graphite. When using the nc command, the fixed format is:
echo <metric.path> <value> <timestamp> | nc <server> <port>
For example:
echo metrics.random.pageview 3680 1446095415 | nc localhost 2003
# chmod +x feed_random_data.sh
# ./feed_random_data.sh
Of course, you can also feed data the Python way, following the example-client.py script mentioned above.
Then open the Graphite web UI, and you will see a graph like the one below:
After logging in with the account root/graphite, you can also create a Dashboard that puts several graphs together for easy viewing:
Graphite can also generate images through an API, which makes them easy to fetch, as shown below:
API URL: http://graphite.readthedocs.org/en/latest/render_api.html
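For instance, a URL along these lines returns a PNG of the simulated data directly (the hostname is a placeholder; the parameters are from the render API docs above):

http://graphite-server/render?target=metrics.random.pageview&from=-24hours&width=800&height=400&format=png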
Recommending a Great Way to Learn English
Posted by mcsrainbow in English, Linux&Unix, 2015/10/27
Learning English well is not just about passing exams; what matters more are listening and speaking.
Here I recommend a great method for learning English, one a foreign teacher taught me. I have experienced it first-hand and it is wonderful beyond words; you get to appreciate a lot of the humor that only exists in English.
It goes like this:
1. Search the web for the screenplay of an English-language movie
2. Read the screenplay and learn all the new words in it
3. Find the movie
4. Watch it with the subtitles (including English subtitles) turned off
Measuring Network Bandwidth Between Hosts with iperf
Posted by mcsrainbow in Linux&Unix, Network, 2015/10/12
References:
https://blogs.oracle.com/mandalika/entry/measuring_network_bandwidth_using_iperf
Background:
When debugging the network, I often need to measure the maximum bandwidth between two hosts. I have always used the iperf command, which works well and gives accurate results, but I found that some ops friends do not know about this tool, so I decided to write a brief introduction.
Steps:
OS: CentOS6.5 x86_64 Minimal
Servers:
192.168.10.11
192.168.10.12
[root@192.168.10.11 ~]# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
[root@192.168.10.11 ~]# yum install iperf
[root@192.168.10.12 ~]# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
[root@192.168.10.12 ~]# yum install iperf
[root@192.168.10.12 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
[root@192.168.10.11 ~]# iperf -c 192.168.10.12
------------------------------------------------------------
Client connecting to 192.168.10.12, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.10.11 port 23351 connected with 192.168.10.12 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 744 MBytes 624 Mbits/sec
[root@192.168.10.12 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.10.12 port 5001 connected with 192.168.10.11 port 23351
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 744 MBytes 623 Mbits/sec
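A few variations I find handy (standard iperf options, run against the same server):

# Run for 30 seconds with 4 parallel streams, reporting every 5 seconds
iperf -c 192.168.10.12 -t 30 -P 4 -i 5

# Measure UDP throughput at a 100 Mbits/sec target rate (start the server with iperf -u -s)
iperf -u -c 192.168.10.12 -b 100M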