Sep 18

VMware Virtual SAN (vSAN) All-Flash Array (AFA) config/demo

Mar 14

vSphere 6.0 Autodeploy firewall issue with tftp on VCSA appliance

Sorry for the long-winded title.  I do not usually blog much, but I figured I would throw this lil’ gold nugget out there in hopes that it helps someone else.  I have already contacted my federal VMware support engineer to notify them of my findings.  There seems to be an issue with using the vCenter Server Appliance as an all-in-one Autodeploy appliance, as I used to in the vSphere 5.5 VCSA days.  Almost all of the previous prerequisites and procedures for implementing Autodeploy on the VCSA are the same from vSphere 5.x to vSphere 6.x as far as I can tell, with one huge takeaway/finding/head-smash-into-the-keyboard discovery: in order for the all-in-one VCSA Autodeploy server to actually hand out the correct TFTP info to the PXE-booting ESXi host, you absolutely need to add an iptables rule to the configuration.

Issue prior to iptables fix: (would get DHCP address but hang at TFTP boot)


It seems that when you enable the Autodeploy service from vCenter, it does not add/allow TFTP connections to the atftpd service/daemon.


My initial gut feeling was that it was related to the host-based iptables firewall.  Drop to the VCSA shell via 'shell.set --enabled True', then the 'shell' command will drop you to a # prompt (root terminal).  A quick check of iptables -nL | grep 69 confirmed my suspicions.  A search for firewall files via 'find / -name "*firewall*"' yielded many firewall scripts/services.  Far smarter people than I can sort out which file needs to be modified.  It seems that '/usr/lib/applmgmt/networking/bin/firewall-reload' does a lot of the heavy lifting, with references to other service firewall scripts.
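For reference, the diagnostic steps described above boil down to the following, run from the VCSA console (the grep pattern and find expression are just the quick checks I used):

shell.set --enabled True
shell
iptables -nL | grep 69 (no ACCEPT rule for udp/69 means atftpd traffic is being dropped)
find / -name '*firewall*' (locates the appliance firewall scripts, e.g. firewall-reload)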


Surgical additions to the iptables ruleset, in either the INPUT chain or the port_filter chain, seemed to do the trick.  The syntax for a one-time fix is 'iptables -A port_filter -m state --state NEW -i eth0 -p udp --dport 69 -j ACCEPT'.  The easiest way to get the udp port 69 rule to stick was the following.


To fix/resolve the TFTP hangs, perform the following (vi the file and add the bottom line):


Restart your VCSA 6.0 appliance/vCenter server and all should be working with Autodeploy again:



I must confess, this all of course started with me having to deploy the VCSA 6.0 appliance.  I cried a lil’ inside when I saw that we are once again forced down the path of using a Windows-based server to deploy the VCSA.  I thought the ova/ovf deployment model was spot on before.  Now you have to mount the VCSA 6.0 .iso on a Windows box, install the Client Integration Plugin, then launch the VCSA install, which queues up the installer and lets you target an ESXi node to lay down the VCSA appliance on.  Bring back the old deployment model is my vote!  It is cool to see the VCSA 6.0 appliance looking more and more like a black box now, with a DCUI-esque look to it.  Solid work, VMware!  Excited to vet the rest of the new suite/feature set.

May 19

vMotion over 10GigE network

I have recently been bitten by the ‘maximum power/speed’ bug and wanted to upgrade my vSphere lab to 10 Gigabit networking.  I used Intel X520-DA2 10G SFP+ NICs, an HP ProCurve 2910al with two J9008A 10G SFP+ modules, and Twinax/DAC cables for connectivity.  vMotions, Storage vMotions, Fault Tolerance, and storage mounts over NFS all push near the theoretical maximum 10 Gigabit speeds once jumbo frames are tuned end-to-end on the physical switch, the VMware distributed switch, the vmkernel ports, and the vNIC interfaces of the guest OSes.  Below is a short video I captured demonstrating vMotioning a large set of VMs simultaneously over the 10 Gigabit network, as well as other network throughput tests.

[jwplayer mediaid="82"]

Screenshots: iperf VM-to-VM across separate ESXi hosts at max theoretical 10G throughput; VM to OmniOS all-in-one at max theoretical 10G throughput; Cacti vSphere graphs during iperf pushing ~9 Gbps between two VMs on different hypervisor hosts; ESXi5b and ESXi5c CPU overhead at 10GbE; knoppix1 and knoppix2 guest CPU overhead; 2910al 10G port utilization; 2910al 'show interfaces port-utilization' output; ZFS SAN zpool status/zpool list.
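For completeness, a rough sketch of the ESXi-side MTU tuning referenced above, using esxcli (the vSwitch/vmkernel names and SAN address are just example values from a lab layout; the distributed switch MTU itself is set in the vSphere client under the dvSwitch settings):

esxcli network vswitch standard set -v vSwitch1 -m 9000 (standard vSwitch MTU, if not on a dvSwitch)
esxcli network ip interface set -i vmk1 -m 9000 (vmkernel port used for vMotion/NFS)
esxcli network ip interface list (verify the MTU took effect)
vmkping -d -s 8972 192.168.1.50 (validate jumbo frames end-to-end with no fragmentation)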

Sep 16

Solaris 11, ZFS, COMSTAR tips/tricks

How-to permit root ssh logins on Solaris 11 GA
vi /etc/ssh/sshd_config
PermitRootLogin yes (change from no; sshd_config does not use an '=' sign)

vi /etc/default/login
#CONSOLE=/dev/console (comment this line out to allow root logins from other than the console)

rolemod -K type=normal root (change root from a role back to a normal user)

svcadm restart ssh

How-to setup static IP on Solaris 11 GA:

Logical IP/interface config

netadm enable -p ncp DefaultFixed
ipadm create-ip net0
ipadm create-addr -T static -a <ip-address>/<prefix> net0/v4
route -p add default <gateway-ip>
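As a concrete illustration of the commands above (the 192.168.1.x addresses are made-up lab values):

netadm enable -p ncp DefaultFixed
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.1.50/24 net0/v4
route -p add default 192.168.1.1
ipadm show-addr (verify the new address is up)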

OLD Solaris 10/11 IP config way
svcadm disable svc:/network/physical:nwam or svcadm disable nwam
svcadm enable svc:/network/physical:default or svcadm enable network/physical:default
ifconfig e1000g0 plumb
ifconfig e1000g1 plumb
system -> admin -> network (set new static IP info here)
activate/reboot to ensure settings stick
route -p add default <gateway-ip> (optional if needed)

DNS config on Solaris 11 GA

# svccfg
svc:> select name-service/switch
svc:/system/name-service/switch> setprop config/host = astring: "files dns"
svc:/system/name-service/switch> setprop config/ipnodes = astring: "files dns"
svc:/system/name-service/switch> select name-service/switch:default
svc:/system/name-service/switch:default> refresh
svc:/system/name-service/switch:default> validate

# svccfg
svc:> select dns/client
svc:/network/dns/client> setprop config/nameserver = net_address: ( 2001:4dd0:fd4e:ff01::1 2001:4dd0:fd4e:ff02::1 )
svc:/network/dns/client> select dns/client:default
svc:/network/dns/client:default> refresh
svc:/network/dns/client:default> validate
svc:/network/dns/client:default> exit

# svcadm enable dns/client
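Optional sanity checks after enabling dns/client (on Solaris 11 nsswitch.conf/resolv.conf are generated from these SMF properties; the hostname is just an example):

svcprop -p config dns/client (dump the resolver properties that were just set)
getent hosts www.oracle.com (confirm resolution goes through files then DNS)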

How-to setup CIFS on Solaris 11 Express
Reference –…

zfs create rpool/cifs
zfs set share=name=rpool_cifs,path=/rpool/cifs,prot=smb rpool/cifs
zfs set sharesmb=on rpool/cifs
chmod -R 777 /rpool/cifs
svcadm enable -r smb/server
svcadm restart smb/server (optional)

OPTIONAL/May be needed
smbadm join -w UNDERCOVERNERDS
vi /etc/pam.conf (other password required nowarn)
passwd whitey (reset passwd then cifs auth will work)

Show Shares – smbadm show-shares minithump
Unshare SMB – zfs unshare zfsbackup or zfs set sharesmb=off zfsbackup

Old Solaris 11 Express way
Reference – (
zfs create speedy/cifs
zfs create speedy/cifs/whitey
cd /speedy/cifs
chown whitey whitey
groupadd whitey
chgrp whitey whitey
cd whitey
echo 'This is a test, ZFS rocks!' > readme.txt
chown whitey readme.txt
chgrp whitey readme.txt
zfs set sharesmb=on speedy/cifs/whitey (optionally set it on speedy/cifs or speedy instead; the property inherits/propagates down from the top level)
zfs get all speedy/cifs/whitey (check sharesmb value to be set to on)
sharemgr show -vp
svcadm enable -r smb/server (restart/online smb server)
svcs | grep smb (look for online)
svcs | grep samba (optional, un-needed with Solaris 11 Express)
svcadm disable svc:/network/samba (optional, un-needed with Solaris 11 Express)
svcs | grep samba (optional, un-needed with Solaris 11 Express, nothing should show up)
smbadm join -w UNDERCOVERNERDS
vi /etc/pam.conf (other password required nowarn)
passwd whitey (reset passwd then cifs auth will work)
zfs set share=name=share,path=/hybrid-pl1/data,prot=smb,guestok=true hybrid-pl1/data (Solaris 11 GA)…

How-to setup NFS on Solaris 11 GA
zfs create zfsbackup/nfs
share -F nfs -o rw /zfsbackup/nfs
zfs set sharenfs=on zfsbackup/nfs
chmod -R 777 /zfsbackup/nfs
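Since these exports back VMware datastores in my lab, here is a hedged example of mounting the new export from an ESXi host (the 192.168.1.50 SAN address and datastore label are made-up values):

esxcfg-nas -a -o 192.168.1.50 -s /zfsbackup/nfs zfs_nfs_ds
esxcfg-nas -l (list/verify the NFS datastore is mounted)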

Old Solaris 11 Express way
zfs create data/nfs
zfs create data/nfs/vmware
cd /data/nfs
zfs set sharenfs=on data/nfs/vmware
zfs set sharenfs=on data
zfs set sharenfs=rw,nosuid,root=<esx-host> data/nfs/vmware
zfs get all data/nfs/vmware (optional)
sharemgr show -vp (optional)
chmod -R 777 /data/
zfs list -o space

Configuring Fibre Channel Devices with COMSTAR
pkg install storage-server
mdb -k
::devbindings -q qlc (note pciex instance and driver qlc)
update_drv -d -i 'pciex1077,2532' qlc
update_drv -a -i 'pciex1077,2532' qlt
reboot/init 6
mdb -k
::devbindings -q qlc (note driver is now qlt)
svcadm enable stmf
stmfadm list-target -v
fcinfo hba-port (optional)
zfs create -b 64k -V 10G rpool/iscsivol
sbdadm create-lu /dev/zvol/rdsk/rpool/iscsivol
sbdadm list-lu
stmfadm list-lu -v
stmfadm add-view 600144F0B152CC0000004B080F230004
stmfadm list-view -l 600144F0B152CC0000004B080F230004
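Optionally, instead of the default view above (visible to all initiators), the LU can be scoped to specific hosts with a host group; the esx-hosts group name and WWN below are placeholders:

stmfadm create-hg esx-hosts
stmfadm add-hg-member -g esx-hosts wwn.2100001B32XXXXXX
stmfadm add-view -h esx-hosts 600144F0B152CC0000004B080F230004
stmfadm list-view -l 600144F0B152CC0000004B080F230004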

How-to setup COMSTAR iSCSI on Solaris 11 Express
pkg install storage-server
svcs stmf
svcadm enable stmf (svcs -a | grep stmf)
svcadm disable iscsitgt (optional)
svcadm enable iscsi/target
stmfadm list-state (optional/validate)
zfs create cmds here (zfs create -b 64k -V 10G rpool/iscsivol) (-s for sparse/thin)
sbdadm create-lu /dev/zvol/rdsk/rpool/iscsivol
sbdadm list-lu
stmfadm list-lu -v
stmfadm add-view 600144F0B152CC0000004B080F230004
stmfadm list-view -l 600144F0B152CC0000004B080F230004
itadm create-tpg e1000g0 <portal-ip-address>
itadm create-target -t e1000g0
itadm list-target -v
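On the ESXi side, a rough sketch of pointing the software iSCSI initiator at the new target (the vmhba33 adapter name and 192.168.1.50 portal are illustrative; check esxcli iscsi adapter list for the real adapter name):

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.50:3260
esxcli storage core adapter rescan --all
esxcli storage core device list (the new COMSTAR LU should show up here)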

Delete VMFS/lun from vSphere client/ESX/i host
sbdadm list-lu
sbdadm delete-lu 600144f09f24820000004e1504c80001 (LUN GUID/32 characters)
zfs destroy <pool>/<zvol> (destroy the backing zvol once the LU is deleted)

CHANGE iSCSI Target/Target Portal Group
itadm delete-tpg -f net0
itadm list-target
itadm delete-target <target-iqn> (-f to force)
stmfadm offline-target <target-iqn>
itadm delete-target <target-iqn>

(GROW iSCSI ZFS zvol hosting VMware VI)
zfs create -b 64k -V 50G tank/vmware/iscsi/iscsivol04
sbdadm create-lu /dev/zvol/rdsk/tank/vmware/iscsi/iscsivol04
stmfadm add-view 600144f09f24820000004f0877ed0002
Add to vSphere, then move VMs onto the iSCSI LUN
zfs set volsize=100G tank/vmware/iscsi/iscsivol04
sbdadm modify-lu -s 100G 600144f09f24820000004f0877ed0002
In vSphere go to zvol iSCSI volume properties and increase to grow VMFS volume

echo | format (look for new iSCSI lun/disk)
svcs -a | grep “iscsi”
iscsiadm add discovery-address <target-ip>:3260
iscsiadm modify discovery --sendtargets enable
iscsiadm list discovery
iscsiadm list target
devfsadm -C -i iscsi
echo | format
zpool create testpool c0t600144F0B152CC0000004B080F230004d0

Snapshots and Cloning ZFS filesystems and DR w/ ZFS send/receive
zfs snapshot data/nfs/vmware@rhel_monday
zfs list -t snapshot (optional snapshots are held in .zfs/snapshot)
zfs list -o space
zfs clone data/nfs/vmware@rhel_monday data/nfs/rhel_clone
zfs set sharenfs=on data/nfs/rhel_clone (mount clone of snapshot in ESX from this mountpoint)
zfs set quota=10G data/nfs/rhel_clone
zfs destroy data/nfs/rhel_clone
(ZFS SEND/RECEIVE-DR Remote Replication of ZFS Data)
zfs send data/nfs/vmware@rhel_monday | ssh <remote-host> zfs recv tank/rhel_clone
oldbox# zfs send backup@1 | pv -s 1015G | nc -l 3333
newbox# nc <oldbox-ip> 3333 | pv -s 1015G | zfs receive data/backup

INCREMENTAL ZFS send/recv – zfs send -i tank/vmware/nfs@120920172011 tank/vmware/nfs@121020172011 | ssh <remote-host> zfs recv -F backup/vmware

More examples:

How-to setup zfs root pool mirror…
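A rough sketch of the usual approach (the disk names are made-up examples; on x86 installgrub makes the second disk bootable, SPARC uses installboot instead):

zpool attach rpool c0t0d0s0 c0t1d0s0
zpool status rpool (wait for the resilver to complete)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0 (x86 boot blocks on the new mirror half)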

How-to setup Link Aggregation

dladm create-aggr -L passive -l igb0 -l e1000g0 aggr1
ifconfig aggr1 plumb up
ifconfig aggr1 <ip-address> netmask <netmask> up

trunk eth 10,12 trk5 lacp (on HP procurve switch)

dladm show-link/show-phys/show-aggr/show-vlan

How-to setup jumbo frames…

For e1000g (Solaris):
Change /kernel/drv/e1000g.conf to:
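The usual e1000g knob here is MaxFrameSize (0=1500, 1=up to 4k, 2=up to 8k, 3=up to 16k), one value per driver instance, followed by a reboot or driver reload; this is a generic example rather than my exact lab config:

MaxFrameSize=3,3,3,3;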

VLAN trunking on Solaris
dladm create-vlan -l aggr1 -v 11
ifconfig aggr11001 plumb
ifconfig aggr11001 <ip-address> netmask <netmask> up

How-to setup XDMCP on Solaris 11 Express
On Solaris 11: svcadm enable graphical-login/gdm (it was already enabled for me).

In /etc/gdm/custom.conf, set the following on both machines in the [security] and [xdmcp] sections (see the sketch after the numbered steps):



1. Edit /etc/gdm/custom.conf, adding “Enable=true” under [xdmcp] section
2. # svccfg -s svc:/application/x11/x11-server setprop options/tcp_listen = true
3. # svcadm restart gdm
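For reference, a minimal /etc/gdm/custom.conf sketch covering both sections (Enable=true comes from step 1 above; DisallowTCP=false under [security] is my assumption for permitting remote X connections):

[security]
DisallowTCP=false

[xdmcp]
Enable=true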

Disk device mapping
paste -d= <(iostat -x | awk '{print $1}') <(iostat -xn | awk '{print $NF}') | tail -n +3

Random useful commands
iostat -xn

zpool iostat -v hybrid-pl1 2

sysconfig configure

dladm show-phys/show-link/show-aggr

ipadm show-if/show-addr

ipadm enable-if -t net0

Sep 06

pfSense…it just makes sense

OK, so over the years I have been a firm believer in ‘rolling my own’ firewall to protect my home network, via a simple iptables-based shell script enabling masquerading and NAT functionality.  For a long time I have known about Untangle, Shorewall, ufw, Firestarter, m0n0wall, IPCop, Vyatta, and a host of other open-source firewalling solutions.  Recently, after speaking to a colleague of mine whom I hold in high regard, I made the switch to pfSense due to its power, flexibility, security posture, feature set, and ease of use.

I am not going to go into the details of getting pfSense up and running, as the documentation is fairly thorough.  The tipping point that convinced me to try pfSense out was the fact that they have recently ported the firewall to an appliance in .ovf format.  Perfect, I thought: I can just deploy the OVF from vCenter and be off to the races instead of downloading the .iso and spinning up the VM myself.

A little background on my LAN topology: for a few years now I have been terminating my WAN connection on a dedicated GigE network interface on one of my ESXi hosts.  This allows great flexibility for how I intend to use the capabilities within pfSense (DMZ zone, more on that later).  So let’s dive into some of the functionality I have deployed to date, which includes NAT, SNMP for Cacti graphs of pfSense, static routes to support internal VPNs, site-to-site VPNs, and road-warrior (dial-in) VPN access.

NAT/Port Forward How-to

Below I will demonstrate how to setup a simple NAT translation to an internal LAN service.

From the top of the pfSense web menu select Firewall -> NAT


Click the upper-right + icon to add a new NAT rule and fill in the required fields.
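As an illustration, a port forward for an internal web server might look something like this (the ports and the 192.168.1.20 target are made-up example values; the field names map roughly to the pfSense port forward edit screen):

Interface: WAN
Protocol: TCP
Destination port range: 443 (HTTPS)
Redirect target IP: 192.168.1.20
Redirect target port: 443 (HTTPS)
Description: Forward HTTPS to internal web server
Filter rule association: Add associated filter rule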