There have been some major IO performance improvements in the past few kernel releases; in some cases you may see around 25% more random read or write IOPS if you are using an SSD.
Kernel 3.17 SCSI multi-queue - http://www.phoronix.com/scan.php?page=news_item&px=MTcyMjk
Kernel 3.19 multi-queue block layer (blk-mq) - http://www.phoronix.com/scan.php?page=news_item&px=MTg2Mjg
If you're running Ubuntu with a kernel older than 3.19 and you have an SSD, you are probably only getting 50%-75% of its potential speed. I'm not joking; the performance gains between kernel 3.13 and 3.19 are huge, even on Ubuntu. CentOS 7 / RHEL 7 only just reached kernel 3.10, which is only slightly better than the 2.6.x kernel in CentOS 6 - depressing when it comes to SSD IO performance.
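A quick sketch for checking what you are running and whether the multi-queue path is active (the device name sda and the GRUB edit are illustrative):

# Show the running kernel version; 3.19+ includes the full blk-mq work
uname -r
# On 3.13-4.x kernels the SCSI multi-queue path is opt-in; append
# scsi_mod.use_blk_mq=1 to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate grub.cfg
grep GRUB_CMDLINE_LINUX /etc/default/grub
# Check which IO scheduler / queue mode a disk is using
cat /sys/block/sda/queue/scheduler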
./configure --prefix=/opt --target-list=x86_64-softmmu --enable-linux-aio \
  --enable-numa --enable-spice --enable-kvm --enable-lzo --enable-snappy \
  --enable-libusb --enable-usb-redir --enable-libiscsi --enable-mc --enable-rdma \
  --disable-libnfs --disable-seccomp --disable-smartcard-nss --disable-fdt --disable-curl \
  --disable-curses --disable-sdl --disable-gtk --disable-tpm --disable-vte --disable-xen \
  --disable-cap-ng
yum install -y numactl lzo snappy pixman celt051
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_rmem = 8192 87380 6291456
net.ipv4.tcp_wmem = 8192 87380 6291456
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_app_win = 40
net.ipv4.tcp_early_retrans = 1
vm.swappiness=0
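A minimal way to apply these settings persistently; the drop-in file name is an example, and sysctl --system requires a newer procps (older systems can use sysctl -p <file> instead):

# After placing the settings above in /etc/sysctl.d/90-kvm-tuning.conf (example name):
sysctl --system                                # reload all sysctl configuration files
sysctl net.ipv4.tcp_timestamps vm.swappiness   # verify the values took effect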
grep -E "ept|vpid" /proc/cpuinfo cat /sys/module/kvm_intel/parameters/ept cat /sys/module/kvm_intel/parameters/vpid
modprobe kvm_intel ept=1 vpid=1
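To keep EPT/VPID enabled across reboots, one option is a modprobe.d drop-in (the file name is an example):

echo "options kvm_intel ept=1 vpid=1" > /etc/modprobe.d/kvm_intel.conf
# Reload the module so the parameters take effect now (shut down guests first)
modprobe -r kvm_intel && modprobe kvm_intel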
grep Hugepagesize /proc/meminfo
mount -t hugetlbfs hugetlbfs /dev/hugepages
sysctl vm.nr_hugepages=1024
qemu-kvm -mem-path /dev/hugepages
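A sketch of making the hugepage pool persistent and launching a guest backed by it (the memory size and image path are placeholders):

# Reserve the pool at boot and mount hugetlbfs automatically
echo "vm.nr_hugepages = 1024" >> /etc/sysctl.conf
echo "hugetlbfs /dev/hugepages hugetlbfs defaults 0 0" >> /etc/fstab
# 1024 x 2 MB hugepages = 2 GB, enough to back a 2 GB guest
qemu-kvm -m 2048 -mem-path /dev/hugepages -mem-prealloc -hda /storage/vm01.qcow2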
Enable a scheduled time-synchronization service on the host server.
Guests can also be configured with a scheduled time-synchronization task.
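For example, a cron-based sync on both host and guests could look like this (the NTP server is an example and ntpdate must be installed; a running ntpd/chronyd daemon works just as well):

# Step the clock every 10 minutes from a public pool server
echo '*/10 * * * * root /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1' >> /etc/crontab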
Usually both options are recommended for:
Important note from Red Hat: Direct Asynchronous IO (AIO) that is not issued on filesystem block boundaries, and falls into a hole in a sparse file on ext4 or xfs filesystems, may corrupt file data if multiple I/O operations modify the same filesystem block. Specifically, if qemu-kvm is used with the aio=native IO mode over a sparse device image hosted on an ext4 or xfs filesystem, guest filesystem corruption will occur if partitions are not aligned with the host filesystem block size. In general, do not use the aio=native option together with cache=none for QEMU. This issue can be avoided by using one of the following techniques (a sample drive definition follows the list):
* cache=writeback,aio=threads
* scheduler = deadline/cfq
* acpi=off
* preallocation = metadata
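A minimal sketch of an image and drive definition following the recommendations above (the image path and size are assumptions):

# Preallocate qcow2 metadata at creation time
qemu-img create -f qcow2 -o preallocation=metadata /storage/vm01.qcow2 100G
# Attach with the safer cache/aio combination; set elevator=deadline (or cfq) inside the guest
qemu-kvm -drive file=/storage/vm01.qcow2,if=virtio,format=qcow2,cache=writeback,aio=threads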
Emulate multiple sockets, cores, and threads where possible so guest software can detect the topology automatically; maxcpus / CPU hotplug still carries a risk of bugs and is not recommended for now.
On the host, try to reserve one socket / one core for the host's own system, using isolcpus.
-smp 4,sockets=2,cores=2,threads=1/2
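A sketch of keeping core 0 for the host and pushing guest threads onto the remaining cores (core numbers are illustrative, and a single guest started as qemu-kvm is assumed):

# Host kernel cmdline: reserve CPU 0 for the host OS
#   GRUB_CMDLINE_LINUX="... isolcpus=0"
# Pin a running guest's threads to cores 1-3, leaving core 0 to the host
taskset -cp 1-3 $(pidof qemu-kvm)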
ZFS + L2ARC and ZIL on SSD.
dm-cache
bcache
EnhanceIO
ZFS (zfsonlinux.org)
ZFS + Gluster + SSD caches seems to be the winner for shared HA storage to me.
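For reference, attaching the SSD to an existing pool is a two-liner (the pool name tank and partitions sdf1/sdf2 are assumptions):

zpool add tank log /dev/sdf1     # ZIL / SLOG on SSD
zpool add tank cache /dev/sdf2   # L2ARC on SSD
zpool status tank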
options zfs zfs_arc_max=40000000000
options zfs zfs_vdev_max_pending=24

Where zfs_arc_max is roughly 40% of your RAM in bytes (Edit: try zfs_arc_max=1200000000). The compiled-in default for zfs_vdev_max_pending is 8 or 10, depending on version. The value should be high (48) for SSDs or low-latency drives, maybe 12-24 for SAS; otherwise leave it at the default. You'll also want some floor values in /etc/sysctl.conf:

vm.swappiness = 10
vm.min_free_kbytes = 512000

Finally, on CentOS you may want to install tuned and tuned-utils and set your profile to virtual-guest with tuned-adm profile virtual-guest. Try these and see if the problem persists. Edit: run zfs set xattr=sa storage (storage being the pool/dataset name). You may have to wipe the volumes and start again (I'd recommend it).
#!/bin/sh echo "options zfs zfs_prefetch_disable=1" > /etc/modprobe.d/zfs.conf echo "options zfs l2arc_noprefetch=0" >> /etc/modprobe.d/zfs.conf awk '/MemTotal/{printf "options zfs zfs_arc_min=%.f\n",$2*1024*1/10}' /proc/meminfo >> /etc/modprobe.d/zfs.conf awk '/MemTotal/{printf "options zfs zfs_arc_max=%.f\n",$2*1024*3/10}' /proc/meminfo >> /etc/modprobe.d/zfs.conf [ -z "$1" ] && echo "$0 poolname" && exit 0 zpool create $1 -f -o ashift=12 raidz \ -O atime=off \ -O relatime=on \ -O compression=lz4 \ -O primarycache=all \ -O secondarycache=all \ -O logbias=throughput \ -O dedup=off \ -O casesensitivity=mixed /dev/sd[bcde] zpool add $1 -f cache sda3
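Assuming the script above is saved as mkzpool.sh (the name is arbitrary), usage would look like:

chmod +x mkzpool.sh
./mkzpool.sh tank          # builds a raidz pool "tank" from sdb-sde with sda3 as L2ARC
zpool status tank
zfs get compression,atime,logbias tank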
mkfs.xfs -L /ssd1 -l internal,lazy-count=1,size=128m -i attr=2 -d agcount=8 -i size=512 -f /dev/sda4
mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,barrier=0,logbufs=8,logbsize=256k /dev/sda4 /storage

[root@vcn40 storage]# time dd if=/dev/zero of=2g bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 31.1406 s, 673 MB/s

real    0m31.143s
user    0m0.010s
sys     0m16.413s

[root@vcn40 storage]# echo 3 > /proc/sys/vm/drop_caches
[root@vcn40 storage]# time dd if=/dev/zero of=2g bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 30.6331 s, 685 MB/s

real    0m31.501s
user    0m0.013s
sys     0m16.881s
[root@vcn40 ~]# cd /storage
[root@vcn40 storage]# time dd if=/dev/zero of=2g bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 31.835 s, 659 MB/s

real    0m31.837s
user    0m0.010s
sys     0m25.371s

[root@vcn40 storage]# echo 3 > /proc/sys/vm/drop_caches
[root@vcn40 storage]# time dd if=/dev/zero of=2g bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 58.9783 s, 356 MB/s

real    0m59.003s
user    0m0.013s
sys     0m27.882s
options bonding max_bonds=2 mode=4 miimon=100 downdelay=100 updelay=100 lacp_rate=1 use_carrier=1 xmit_hash_policy=layer3+4
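That line typically goes in /etc/modprobe.d/bonding.conf; on CentOS the interfaces themselves are wired up with ifcfg files roughly like the following (device names and addresses are examples):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.2.40
NETMASK=255.255.255.0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-em1 (one per slave NIC)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes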
yum install -y python-setuptools libjpeg-devel cyrus-sasl-devel openssl-devel celt051-devel alsa-lib-devel glib2-devel libXrandr-devel libXinerama-devel xorg-x11-server-devel gcc gcc-c++ autoconf automake
vi /usr/bin/startyc
#!/bin/sh
cd /usr/local/bin/MiGateway/
./ycc
vi /etc/rc.d/rc.local
/usr/bin/xinit /usr/bin/startyc
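Both the wrapper and rc.local need to be executable for the xinit call to run at boot:

chmod +x /usr/bin/startyc /etc/rc.d/rc.local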
If systemd is used to manage startup processes:
vi /etc/systemd/system/rc.local.service
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target
systemctl enable rc.local.service
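The unit only starts if /etc/rc.local exists and is executable (because of ConditionPathExists); a quick check:

chmod +x /etc/rc.local
systemctl daemon-reload
systemctl start rc.local.service
systemctl status rc.local.service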
mkusb.sh
#!/bin/sh
[ -z "$1" ] && echo "$0 /dev/sdX" && exit 0
DEV=$1
IMG="/root/ycos_client_xinit.tgz"
FORMAT_FORCE="y"
MOUNT_POINT="/mnt/"

if [ "$FORMAT_FORCE" = "y" ]; then
# Wipe existing partitions, then create a 4 GB ext4 root, a 1 GB swap, and a third partition
fdisk $DEV <<EOF
d
3
d
2
d
n
p
1
1
+4096M
n
p
2

+1024M
n
p
3


t
2
82
a
1
w
EOF
partx -a $DEV
sleep 3
mkfs.ext4 -L /Amy ${DEV}1
mkswap -L /Swap ${DEV}2
fi

mount ${DEV}1 $MOUNT_POINT
tar zxvf $IMG -C $MOUNT_POINT
grub-install --root-directory=$MOUNT_POINT --no-floppy --recheck $DEV
umount $MOUNT_POINT
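Usage is a single argument, the target device; everything on it is wiped, so double-check with lsblk first (/dev/sdb is an example target):

lsblk
./mkusb.sh /dev/sdb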
| Category | Option | Selection |
|---|---|---|
| Safety | Power backup | UPS |
| Switch | Port bonding | Dual gigabit bond |
| Network | Network speed | 100M~1000M |
| Users | Typical count / total count | |
| | Usage type | Office, entertainment, security |
yum install -y inotify-tools rsync
/usr/local/bin/inotify_rsync.sh
#!/bin/sh
SRC="/xx/"
DST="/root/xx/"
HOSTS="10.0.2.47 10.0.2.48"
SSH_OPTS="-i/root/.ssh/id_rsa -p65422 -x -T -c arcfour -o Compression=no -oStrictHostKeyChecking=no"

# Don't change below
NUM=($HOSTS)
NUM=${#NUM[*]}
SPEED=$((100000/$NUM))

/usr/bin/inotifywait -mrq -e close_write,delete --format '%f' $SRC | while read files; do
    for ip in $HOSTS; do
        echo $files
        rsync -aP --bwlimit=$SPEED --delete -e "ssh $SSH_OPTS" $SRC root@$ip:$DST
    done
done
nohup /usr/local/bin/inotify_rsync.sh >/var/log/rsync.log 2>&1 &
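The script assumes passwordless key authentication to each destination host on port 65422; a one-time setup sketch:

# Generate a key (if one does not exist) and install it on each destination host
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''
for ip in 10.0.2.47 10.0.2.48; do
    cat /root/.ssh/id_rsa.pub | ssh -p 65422 root@$ip 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
done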
yum install supervisor
[program:vcnagent]
command=/opt/vcn/vdi/vcnagent "-info"
;environment=PATH=/opt/bin:/opt/sbin:%(ENV_PATH)s
priority=999
autostart=true
autorestart=true
startsecs=10
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
log_stdout=true
log_stderr=true
logfile=/var/log/vcnagent.log
logfile_maxbytes=1MB
logfile_backups=10
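After dropping that block into supervisord's configuration (e.g. /etc/supervisord.conf or a file under /etc/supervisord.d/, depending on the package layout), reload and check:

supervisorctl reread          # parse the new program section
supervisorctl update          # start it under supervision
supervisorctl status vcnagent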
bcdedit -set loadoptions DISABLE_INTEGRITY_CHECKS
bcdedit -set TESTSIGNING ON