May 19

vMotion over 10GigE network

I have recently been bitten by the 'maximum power/speed bug' and wanted to upgrade my vSphere lab to 10 Gigabit networking.  I used Intel X520-DA2 10G SFP+ NICs, an HP ProCurve 2910al with two J9008A 10G SFP+ modules, and Twinax/DAC cables for connectivity.  vMotions, Storage vMotions, Fault Tolerance, and storage mounts over NFS all push near the maximum theoretical 10 Gigabit speed once jumbo frames are tuned end-to-end on the physical switch, the VMware distributed switch, the vmkernel ports, and the vNIC interfaces of the guest OSes.  Below is a short video I captured demonstrating a simultaneous vMotion of a large set of VMs over the 10 Gigabit network, along with other network throughput tests.
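The ESXi side of that end-to-end jumbo-frame tuning can be sketched roughly as below (ESXi 5.x esxcli syntax; vSwitch0, vmk0, and the target IP are placeholders for your own switch, vmkernel port, and SAN address — a distributed switch's MTU is set in its properties from the vSphere client instead):

```shell
# Raise the MTU on a standard vSwitch and on a vmkernel port (placeholders).
esxcli network vswitch standard set -v vSwitch0 -m 9000
esxcli network ip interface set -i vmk0 -m 9000

# Confirm the MTU column shows 9000.
esxcli network ip interface list

# End-to-end check from the ESXi shell: 8972 = 9000 minus IP/ICMP headers;
# -d forbids fragmentation, so this fails if any hop is not jumbo-clean.
vmkping -d -s 8972 192.168.2.187
```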

[jwplayer mediaid="82"]

[Screenshots: iperf VM-to-VM across separate ESXi hosts at max theoretical 10G throughput; VM to OmniOS all-in-one at max theoretical 10G throughput; Cacti vSphere graphs during a 9 Gbps iperf run between two VMs on different hypervisor hosts; 10GbE CPU overhead on ESXi5b/ESXi5c and the two Knoppix guests; 2910al 10G port utilization (`show interfaces port-utilization`); ZFS SAN zpool status/list.]

Sep 26

A couple of handy virtualization guides

 

vSphere_Design_Pocketbook

Mastering_VMware_vSphere_4


Sep 16

Solaris 11, ZFS, COMSTAR tips/tricks

How-to permit root ssh logins on Solaris 11 GA
vi /etc/ssh/sshd_config
PermitRootLogin yes

vi /etc/default/login
#CONSOLE=/dev/console (comment out the CONSOLE line)

rolemod -K type=normal root

svcadm restart ssh

How-to setup static IP on Solaris 11 GA:

Logical IP/interface config

netadm enable -p ncp DefaultFixed
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.2.187/24 net0/v4
route -p add default 192.168.2.254

OLD Solaris 10/11 IP config way
svcadm disable svc:/network/physical:nwam or svcadm disable nwam
svcadm enable svc:/network/physical:default or svcadm enable network/physical:default
ifconfig e1000g0 plumb
ifconfig e1000g1 plumb
system -> admin -> network (set new static IP info here)
activate/reboot to ensure settings stick
route -p add default 192.168.2.254 (optional if needed)

DNS config on Solaris 11 GA

http://blog.tschokko.de/archives/1007

# svccfg
svc:> select name-service/switch
svc:/system/name-service/switch> setprop config/host = astring: "files dns"
svc:/system/name-service/switch> setprop config/ipnodes = astring: "files dns"
svc:/system/name-service/switch> select name-service/switch:default
svc:/system/name-service/switch:default> refresh
svc:/system/name-service/switch:default> validate

# svccfg
svc:> select dns/client
svc:/network/dns/client> setprop config/nameserver=net_address: ( 2001:4dd0:fd4e:ff01::1 2001:4dd0:fd4e:ff02::1 )
svc:/network/dns/client> select dns/client:default
svc:/network/dns/client:default> refresh
svc:/network/dns/client:default> validate
svc:/network/dns/client:default> exit

# svcadm enable dns/client

How-to setup CIFS on Solaris 11 Express
Reference – https://blogs.oracle.com/paulie/entry/cifs_sharing_on_solaris_11

http://docs.oracle.com/cd/E23824_01/html/821-1462/share-smb-1m.html#scro…

zfs create rpool/cifs
zfs set share=name=rpool_cifs,path=/rpool/cifs,prot=smb rpool/cifs
zfs set sharesmb=on rpool/cifs
chmod -R 777 /rpool/cifs
svcadm enable -r smb/server
svcadm restart smb/server (optional)

OPTIONAL/May be needed
smbadm join -w UNDERCOVERNERDS
vi /etc/pam.conf (other password required pam_smb_passwd.so.1 nowarn)
passwd whitey (reset passwd then cifs auth will work)

Show Shares – smbadm show-shares minithump
Unshare SMB – zfs unshare zfsbackup or zfs set sharesmb=off zfsbackup

Old Solaris 11 Express way
Reference – (http://breden.org.uk/2008/03/08/home-fileserver-zfs-setup/)
zfs create speedy/cifs
zfs create speedy/cifs/whitey
cd /speedy/cifs/whitey
chown whitey whitey
groupadd whitey
chgrp whitey whitey
cd whitey
echo 'This is a test, ZFS rocks!' > readme.txt
chown whitey readme.txt
chgrp whitey readme.txt
zfs set sharesmb=on speedy/cifs/whitey (optionally set on speedy/cifs and speedy; the top level will inherit/propagate down)
zfs get all speedy/cifs/whitey (check sharesmb value to be set to on)
sharemgr show -vp
svcadm enable -r smb/server (restart/online smb server)
svcs | grep smb (look for online)
svcs | grep samba (optional, un-needed with Solaris 11 Express)
svcadm disable svc:/network/samba (optional, un-needed with Solaris 11 Express)
svcs | grep samba (optional, un-needed with Solaris 11 Express, nothing should show up)
smbadm join -w UNDERCOVERNERDS
vi /etc/pam.conf (other password required pam_smb_passwd.so.1 nowarn)
passwd whitey (reset passwd then cifs auth will work)
zfs set share=name=share,path=/hybrid-pl1/data,prot=smb,guestok=true hybrid-pl1/data (Solaris 11 GA)


How-to setup NFS on Solaris 11 GA
zfs create zfsbackup/nfs
share -F nfs -o rw /zfsbackup/nfs
zfs set sharenfs=on zfsbackup/nfs
chmod -R 777 /zfsbackup/nfs
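Once the export is live, it can be mounted as a datastore from the ESXi side; a minimal sketch, assuming the SAN answers at 192.168.2.187 and using 'zfsbackup' as the datastore label (both placeholders):

```shell
# Run on the ESXi host: attach the NFS export as a datastore.
esxcfg-nas -a -o 192.168.2.187 -s /zfsbackup/nfs zfsbackup

# List NFS datastores to confirm the mount.
esxcfg-nas -l
```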

Old Solaris 11 Express way
zfs create data/nfs
zfs create data/nfs/vmware
cd /data/nfs
zfs set sharenfs=on data/nfs/vmware
zfs set sharenfs=on data
or
zfs set sharenfs=rw,nosuid,root=192.168.2.10:192.168.2.20 data/nfs/vmware
zfs get all data/nfs/vmware (optional)
sharemgr show -vp (optional)
chmod -R 777 data/
zfs list -o space

Configuring Fibre Channel Devices with COMSTAR
pkg install storage-server
mdb -k
::devbindings -q qlc (note pciex instance and driver qlc)
$q
update_drv -d -i 'pciex1077,2532' qlc
update_drv -a -i 'pciex1077,2532' qlt
reboot/init 6
mdb -k
::devbindings -q qlc (note driver is now qlt)
$q
svcadm enable stmf
stmfadm list-target -v
fcinfo hba-port (optional)
zfs create -b 64k -V 10G rpool/iscsivol
sbdadm create-lu /dev/zvol/rdsk/rpool/iscsivol
sbdadm list-lu
stmfadm list-lu -v
stmfadm add-view 600144F0B152CC0000004B080F230004
stmfadm list-view -l 600144F0B152CC0000004B080F230004

How-to setup COMSTAR iSCSI on Solaris 11 Express
(TARGET)
pkg install storage-server
svcs stmf
svcadm enable stmf (svcs -a | grep stmf)
svcadm disable iscsitgt (optional)
svcadm enable iscsi/target
stmfadm list-state (optional/validate)
zfs create -b 64k -V 10G rpool/iscsivol (add -s for sparse/thin provisioning)
sbdadm create-lu /dev/zvol/rdsk/rpool/iscsivol
sbdadm list-lu
stmfadm list-lu -v
stmfadm add-view 600144F0B152CC0000004B080F230004
stmfadm list-view -l 600144F0B152CC0000004B080F230004
itadm create-tpg e1000g0 192.168.2.118
itadm create-target -t e1000g0
itadm list-target -v

DEPROVISION iSCSI LUN
Delete VMFS/lun from vSphere client/ESX/i host
sbdadm list-lu
sbdadm delete-lu 600144f09f24820000004e1504c80001 (LUN GUID/32 characters)
zfs destroy rpool/iscsivol (remove the backing zvol)

CHANGE iSCSI Target/Target Portal Group
itadm delete-tpg -f net0
itadm list-target
itadm delete-target iqn.1986-03.com.sun:02:8191cb2e-9d90-4be3-c829-d5ce7b8f041c (-f to force)
or
stmfadm offline-target iqn.1986-03.com.sun:02:8191cb2e-9d90-4be3-c829-d5ce7b8f041c
itadm delete-target iqn.1986-03.com.sun:02:8191cb2e-9d90-4be3-c829-d5ce7b8f041c

(GROW iSCSI ZFS zvol hosting VMware VI)
zfs create -b 64k -V 50G tank/vmware/iscsi/iscsivol04
sbdadm create-lu /dev/zvol/rdsk/tank/vmware/iscsi/iscsivol04
stmfadm add-view 600144f09f24820000004f0877ed0002
Add the datastore in vSphere and move VMs onto the iSCSI LUN
zfs set volsize=100G tank/vmware/iscsi/iscsivol04
sbdadm modify-lu -s 100G 600144f09f24820000004f0877ed0002
In vSphere, open the iSCSI datastore properties and increase the capacity to grow the VMFS volume

(INITIATOR)
echo | format (look for new iSCSI lun/disk)
svcs -a | grep “iscsi”
iscsiadm add discovery-address 192.168.56.102:3260
iscsiadm modify discovery --sendtargets enable
iscsiadm list discovery
iscsiadm list target
devfsadm -C -i iscsi
echo | format
zpool create testpool c0t600144F0B152CC0000004B080F230004d0

Snapshots and Cloning ZFS filesystems and DR w/ ZFS send/receive
zfs snapshot data/nfs/vmware@rhel_monday
zfs list -t snapshot (optional snapshots are held in .zfs/snapshot)
zfs list -o space
zfs clone data/nfs/vmware@rhel_monday data/nfs/rhel_clone
zfs set sharenfs=on data/nfs/rhel_clone (mount clone of snapshot in ESX from this mountpoint)
zfs set quota=10G data/nfs/rhel_clone
zfs destroy data/nfs/rhel_clone
(ZFS SEND/RECEIVE-DR Remote Replication of ZFS Data)
zfs send data/nfs/vmware@rhel_monday | ssh 192.168.2.112 zfs recv tank/rhel_clone
OR
oldbox# zfs send backup@1 | pv -s 1015G | nc -l 3333
newbox# nc 172.16.210.9 3333 | pv -s 1015G | zfs receive data/backup

INCREMENTAL ZFS send/recv – zfs send -i tank/vmware/nfs@120920172011 tank/vmware/nfs@121020172011 | ssh 192.168.2.77 zfs recv -F backup/vmware
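The incremental send/recv above leans on timestamped snapshot names; a small helper that generates names in the same MMDDYYYYHHMM style can be sketched like this (the dataset, previous snapshot, and remote host in the usage comments are placeholders):

```shell
# Generate a snapshot name stamped MMDDYYYYHHMM, matching the style above.
snapname() {
  printf '%s@%s\n' "$1" "$(date +%m%d%Y%H%M)"
}

# Typical use in a nightly replication job (all names are placeholders):
#   snap=$(snapname tank/vmware/nfs)
#   zfs snapshot "$snap"
#   zfs send -i "$prev_snap" "$snap" | ssh 192.168.2.77 zfs recv -F backup/vmware
snapname tank/vmware/nfs
```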

More examples: http://www.128bitstudios.com/2010/07/23/fun-with-zfs-send-and-receive/

How-to setup zfs root pool mirror

http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-orac…

How-to setup Link Aggregation

http://breden.org.uk/2008/04/05/home-fileserver-trunking/

dladm create-aggr -L passive -l igb0 -l e1000g0 aggr1
ifconfig aggr1 plumb up
ifconfig aggr1 172.16.77.187 netmask 255.255.255.0 up

trunk eth 10,12 trk5 lacp (on HP procurve switch)

dladm show-link/show-phys/show-aggr/show-vlan

How-to setup jumbo frames

http://www.alekz.net/archives/318

http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-adv…

For e1000g (Solaris):
Change /kernel/drv/e1000g.conf to:
MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3; (one value per e1000g instance; 3 selects the largest jumbo-frame size class)
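On Solaris 11 the MTU can usually be raised per-link via a dladm property instead of editing the driver.conf; a sketch, assuming the link is named net0 (any IP interface on it may need to be deleted and recreated first):

```shell
# Alternative on Solaris 11: set the jumbo MTU as a link property.
dladm set-linkprop -p mtu=9000 net0

# Verify the effective value.
dladm show-linkprop -p mtu net0
```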

VLAN trunking on Solaris
dladm create-vlan -l aggr1 -v 11
ifconfig aggr11001 plumb
ifconfig aggr11001 192.168.2.188 netmask 255.255.255.0 up

How-to setup XDMCP on Solaris 11 Express
On Solaris 11: svcadm enable graphical-login/gdm (it may already be enabled).

In /etc/gdm/custom.conf, set on both machines:

[security]
DisallowTCP=false

[xdmcp]
Enable=true

OR

1. Edit /etc/gdm/custom.conf, adding "Enable=true" under the [xdmcp] section
2. # svccfg -s svc:/application/x11/x11-server setprop options/tcp_listen = true
3. # svcadm restart gdm

Disk device mapping
paste -d= <(iostat -x | awk '{print $1}') <(iostat -xn | awk '{print $NF}') | tail -n +3

Random useful commands
iostat -xn

zpool iostat -v hybrid-pl1 2

sysconfig configure

dladm show-phys/show-link/show-aggr

ipadm show-if/show-addr

ipadm enable-if -t net0

Sep 12

UPNP/DLNA Home Media Server powered by PMS

PS3 Media Server + XBMC client = the perfect media center

Sep 06

pfSense…it just makes sense

OK, so over the years I have been a firm believer in 'rolling my own' firewall to protect my home network, using a simple iptables-based shell script to enable masquerading and NAT.  I have long known about Untangle, Shorewall, ufw, Firestarter, m0n0wall, IPCop, Vyatta, and the host of other firewalling solutions out there.  Recently, after speaking with a colleague I hold in high regard, I made the switch to pfSense for its power, flexibility, security posture, feature set, and ease of use.

I am not going to go into the details of getting pfSense up and running, as the documentation is fairly thorough.  The tipping point that convinced me to try pfSense was that the firewall has recently been ported to an appliance in .ovf format.  Perfect, I thought: I can just deploy the OVF from vCenter and be off to the races instead of downloading the .iso and spinning up a VM.

A little background on my LAN topology: for a few years now I have been terminating my WAN connection on a dedicated GigE network interface on one of my ESXi hosts.  This allows great flexibility in how I use the capabilities within pfSense (DMZ zone, more on that later).  So let's dive into some of the functionality I have deployed to date, which includes NAT, SNMP for Cacti graphs of pfSense, static routes to support internal VPNs, site-to-site VPNs, and road-warrior (dial-in) VPN access.

NAT/Port Forward How-to

Below I will demonstrate how to setup a simple NAT translation to an internal LAN service.

From the top of the pfSense web menu select Firewall -> NAT

pfSense_NAT1

Click the upper-right + icon to add a new NAT rule and fill in the required fields.

pfSense_NAT2