At the office we recently moved buildings. Roughly four years ago we moved out so the building could be remodeled, but now the wait is over and we are at our new place. It has been a lot of hard work for our IT team, but I think we managed it very well.

What this means for me is that I have been dealing with a lot of work on top of my usual DevOps tasks, so a couple of projects have suffered delays. I really need to get the ball rolling here so that I can make up for lost time, and that means: a better workflow.

For a couple of months now I have been working remotely from the old building on my Linux laptop (where I usually code), handling my other tasks on my company desktop PC running Windows 10 through RDP with Remmina. This worked like a charm... until now: here at the new office I don't really need my laptop anymore.

At first, I tried to set up my dev environment in a VirtualBox virtual machine, but my computer froze so many times, wreaking havoc (it seems my computer doesn't handle virtualization very well and struggles with the load, despite having a Core i7 processor...), that I'm not confident in that setup anymore. The other downside to that approach was that, since I was already running in a virtualized environment, I couldn't run Homestead inside it so... bummer!

I needed to find another solution, and thus enters my lab server, which I use for spinning up virtual machines when I need to test and try out new stuff, so why not take advantage of it? It runs VMware ESXi 6.7 and handles virtualization like a pro, being a dual-core Xeon workstation with 32GB of RAM.

For this new setup, I needed two main things:

  1. My dev environment would now live in the hypervisor, which I would need to access remotely with the clipboard and folders seamlessly shared, so that I can work cross-platform but feel like I'm using just one computer (I have three monitors, after all).
  2. I would need to spin up Homestead from within my remotely accessed dev environment without hassle, as if it were a normal setup, when in reality it would also live in the hypervisor. It should be accessible (via HTTP) from my dev machine and from my main desktop machine too.

So, first things first...

Setting up my dev environment

This was the easy part. As you've probably guessed, I just needed to install Ubuntu 18.04.1 64-bit in a new virtual machine with 8GB of RAM. I had already developed a modular, host-configurable script that prepares the environment the way I like it for whenever I need to reinstall one of my devices, so setting up the environment was just a matter of patience.

I run the guest machine through VMRC (VMware Remote Console) so I can have it fullscreen on one monitor (and let's face it, using the VM through a web page just doesn't feel right).

Synced clipboard

For the shared/synced clipboard I needed to install the VM Tools (either the VMware ones or the recommended open-vm-tools). To do so I ran:

sudo apt install open-vm-tools-desktop

Additionally, I needed to configure the virtual machine to allow a synced clipboard. This is done in the Configuration Parameters for the VM in the VMware ESXi web console (VM > Edit > VM Options > Advanced > Configuration Parameters > Edit Configuration...). I just set isolation.tools.copy.disable and isolation.tools.paste.disable both to FALSE, rebooted the VM, and that was it: synced clipboards to copy/paste between host and guest.
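
These two parameters end up as plain entries in the VM's .vmx file, so if you prefer they can also be added there directly while the VM is powered off. The resulting entries look like this:

isolation.tools.copy.disable = "FALSE"
isolation.tools.paste.disable = "FALSE"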

Shared folders

This part was a bit trickier because it needed more configuration. For sharing folders I am using Samba, as it's the native protocol Windows uses and it can be configured pretty easily in Ubuntu too.

For this I needed to install the samba package to run a Samba server on my dev environment (I wanted bi-directional sharing, so I could access Win->Ubuntu or Ubuntu->Win indistinctly):

sudo apt install samba

Once installed I modified the configuration:

sudo vim /etc/samba/smb.conf

Inside this file I added the following lines under the [global] config section:

[global]
    client min protocol = SMB2
    client max protocol = SMB3
    unix extensions = no

This makes our Ubuntu protocols compatible with what Windows expects.

Now we need to configure our Ubuntu share. At the bottom of the /etc/samba/smb.conf file I added the following:

[Home$]
    path = /home/axel
    browseable = yes
    valid users = @axel
    force group = axel
    create mask = 0644
    directory mask = 0755
    writeable = yes

The final step is adding a samba password for my user:

sudo smbpasswd -a axel
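
It's worth restarting the service afterwards and sanity-checking the configuration and the share. A minimal check (note that smbclient ships in its own package on Ubuntu):

sudo systemctl restart smbd      # reload the new configuration
testparm -s                      # validate smb.conf
smbclient -L localhost -U axel   # list the shares as my user

From Windows, the share is then reachable at \\<dev-vm-ip>\Home$.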

That's it! I now share my home folder with Windows, and I also have a share on the Windows side which I connected to through Nemo from my dev environment. Now I can copy files between both machines. Awesome!

Note: I also tried setting up an NFS share because it seems to be faster, but, although I managed to get it working on Windows, sometimes the file transfers would freeze, so I'm happy to sacrifice speed for stability.

On to the next step...

Spinning up Homestead in the hypervisor

With my dev environment already set up, it was time to get Homestead working. I have to say this task was not easy and took three days of trying out a lot of things (many of which I don't remember anymore), although in the end I got this setup working fine. So let's dig in...

The first thing is that Vagrant needs to know that I want to create the virtual machine in the ESXi hypervisor instead of a local installation of VirtualBox or VMware. For that there is a great Vagrant plugin written by Jonathan Senkerik, aptly named vagrant-vmware-esxi.

I just followed the instructions to install it, which basically comes down to installing the requirements and the plugin. Obviously, since we will be using Vagrant, we need it installed; I'm using version 2.1.2, which as of now is the latest release.

The plugin requires the OVF Tool, which can be downloaded from VMware's site (you will need to create an account). The latest version at the time of writing is 4.3.0. Once installed, I moved on to installing the plugin itself with:

vagrant plugin install vagrant-vmware-esxi

Version 2.4.0 is the latest at the time of writing.

Note: If you have problems installing the OVF Tool and get the No protocol specified error, you need to allow local apps to connect to your X session:

xhost +local:
sudo ./VMware-OVF-Tool.bundle
xhost -local:    # Revert the change

As per the requirements, I needed to activate SSH on my hypervisor, which I did. I also added a dedicated account in ESXi to be used by this setup to provision Homestead.

As I would be mounting the shared folders through NFS, I also installed the nfs-kernel-server package:

sudo apt-get install nfs-kernel-server

After this, things got a bit complicated, as I needed to figure out how the hell to tell the Homestead scripts to use my ESXi server, and that involved several steps.

First I set provider: vmware_esxi in the Homestead.yaml file, but I needed a way to pass the plugin's parameters to the Vagrant configuration, so I ended up modifying the scripts/homestead.rb file to accommodate this. I added lines 64-79 in homestead.rb.
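
The addition boils down to a provider block that maps the new Homestead.yaml keys onto the plugin's options. A rough sketch of what it does, assuming the settings hash that homestead.rb parses from Homestead.yaml (the option names are taken from the plugin's documentation; my actual code may differ slightly):

config.vm.provider :vmware_esxi do |esxi|
    esxi.esxi_hostname        = settings['esxi_hostname']
    esxi.esxi_username        = settings['esxi_username']
    esxi.esxi_password        = settings['esxi_password']
    esxi.esxi_disk_store      = settings['esxi_disk_store']
    esxi.esxi_virtual_network = settings['esxi_virtual_network']
    esxi.debug                = settings['esxi_debug'] if settings.key?('esxi_debug')
end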

After introducing these changes, my configuration file looks like this. Let's focus on some lines:

  • esxi_hostname: The VMware ESXi host.
  • esxi_username: The username I created specifically for Homestead in the hypervisor to authenticate with.
  • esxi_password: One of the authentication mechanisms provided by the plugin.
  • esxi_disk_store: The disk store where the virtual disk will be created.
  • esxi_virtual_network: The networks that should be assigned to the interfaces.
  • esxi_debug: Puts the plugin in debug mode.

Instead of having the password inside Homestead.yaml itself, I created a file at ~/.secrets/vmware_esxi.pwd containing the password of the account used to authenticate with ESXi.
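
Put together, the relevant additions to Homestead.yaml look roughly like this (the host, disk store and network names match the vagrant up output below; the username and the file: password mechanism, which the plugin documents, are illustrative):

provider: vmware_esxi

esxi_hostname: esxi_hypervisor
esxi_username: homestead                          # dedicated ESXi account
esxi_password: 'file:~/.secrets/vmware_esxi.pwd'  # read the password from a file
esxi_disk_store: storage1
esxi_virtual_network:
    - Default
    - Homestead Network
    - Local Network
esxi_debug: false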

The list of networks in esxi_virtual_network corresponds to networks defined in ESXi, which will be assigned to the network interfaces created for the virtual machine. In this case I have set up three interfaces:

  • eth0 -> Default interface created for vagrant management purposes.
  • eth1 -> Interface for Homestead private network (if I need a contained homestead network later on).
  • eth2 -> Interface for LAN.

There's a caveat I found here (which I don't like very much, but it's the way it is for now): Vagrant does not allow assigning a static IP to the management network interface (which in this case makes sense, as the VM is not within the same host). So, for this to work, there must be a DHCP server on the network to hand an IP to the management interface. Without it, the VM can't be provisioned because it won't be reachable.

So I got the ball rolling:

vagrant up

It took some time as the box was downloaded to my dev machine and transferred to ESXi using the OVF Tool, so... patience... lots...

Output:

Bringing machine 'homestead-7' up with 'vmware_esxi' provider...
==> homestead-7: Virtual Machine will be built.
VMware ovftool 4.3.0 (build-7948156)
==> homestead-7: ---   --- ESXi Summary ---
==> homestead-7: --- ESXi host       : esxi_hypervisor
==> homestead-7: --- Virtual Network : ["Default", "Homestead Network", "Local Network"]
==> homestead-7: --- Disk Store      : storage1
==> homestead-7: --- Resource Pool   : /
==> homestead-7: ---  --- Guest Summary ---
==> homestead-7: --- VM Name         : homestead
==> homestead-7: --- Box             : laravel/homestead
==> homestead-7: --- Box Ver         : 6.2.0
==> homestead-7: --- Memsize (MB)    : 1024
==> homestead-7: --- CPUS            : 1
==> homestead-7: --- Guest OS type   : ubuntu-64
==> homestead-7: ---   --- Guest Build ---
Opening VMX source: /home/axel/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/6.2.0/vmware_desktop/ZZZZ_homestead.vmx
Opening VI target: vi://[email protected]_hypervisor:443/
Deploying to VI: vi://[email protected]_hypervisor:443/
Transfer Completed                    
Completed successfully
==> homestead-7: --- VMID            : 42
==> homestead-7: --- VM has been Powered On...
==> homestead-7: --- Waiting for state "running"
==> homestead-7: --- Success, state is now "running"
==> homestead-7: --- Configuring     : 192.168.10.10/255.255.255.0 on Homestead Network
==> homestead-7: --- Configuring     : 172.16.1.200/255.255.0.0 on Local Network
    homestead-7: 
    homestead-7: Vagrant insecure key detected. Vagrant will automatically replace
    homestead-7: this with a newly generated keypair for better security.
    homestead-7: 
    homestead-7: Inserting generated public key within guest...
    homestead-7: Removing insecure key from the guest if it's present...
    homestead-7: Key inserted! Disconnecting and reconnecting using new SSH key...
==> homestead-7: Pruning invalid NFS exports. Administrator privileges will be required...
==> homestead-7: Setting hostname...
==> homestead-7: Exporting NFS shared folders...
==> homestead-7: Preparing to edit /etc/exports. Administrator privileges will be required...
==> homestead-7: Mounting NFS shared folders...
==> homestead-7: Running provisioner: file...
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: inline script
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: inline script
    homestead-7: 
    homestead-7: ssh-rsa [very long key text removed for security]
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: inline script
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: /tmp/vagrant-shell20180803-8410-1s3kurs.sh
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Creating Certificate: mysite.test
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Creating Site: mysite.test
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: inline script
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Checking for old Schedule
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Clear Variables
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Restarting Cron
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Restarting Nginx
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Creating MySQL Database: mysite
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Creating Postgres Database: mysite
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: script: Update Composer
    homestead-7: You are running composer as "root", while "/home/vagrant/.composer" is owned by "vagrant"
    homestead-7: Updating to version 1.7.0 (stable channel).
    homestead-7:    
    homestead-7: Use composer self-update --rollback to return to version 1.6.5
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: /tmp/vagrant-shell20180803-8410-yfist9.sh
==> homestead-7: Running provisioner: shell...
    homestead-7: Running: /tmp/vagrant-shell20180803-8410-1kigaer.sh

But why is that Exporting NFS shared folders... step asking for my password? Well, NFS shared folders need to be exported so that the client can mount them, so Vagrant writes the shared folders from its configuration into /etc/exports like so:

# VAGRANT-BEGIN: 1000 42
"/home/axel/code" 172.16.1.200(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000,fsid=1468729)
"/home/axel/.homestead" 172.16.1.200(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000,fsid=4987536)
# VAGRANT-END: 1000 42

I found that sometimes there was a permissions error when mounting the shares from Homestead; I solved it by specifying the network mask in CIDR format, like 172.16.1.200/16, or by adding a configuration entry with the correct IP address outside the # VAGRANT-BEGIN and # VAGRANT-END markers (so Vagrant doesn't prune it).
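
For example, a manually maintained entry (hypothetical, mirroring the Vagrant-generated one above but with the /16 mask applied) would look like this; remember to run sudo exportfs -ra after editing /etc/exports by hand:

"/home/axel/code" 172.16.1.200/16(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000)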

Another error I found is that the /etc/hosts file has an entry for the hostname that resolves to 127.0.1.1. When mounting the NFS folders, sometimes this IP would be used and the mount wouldn't work. I just commented that line out in /etc/hosts and it worked. I opened an issue about this in the vagrant-vmware-esxi plugin's repository.
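
In other words, on the dev machine the relevant part of /etc/hosts ends up like this (the hostname is made up):

127.0.0.1       localhost
#127.0.1.1      devbox    # commented out so the hostname no longer resolves to 127.0.1.1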

I'm quite happy with how it all came out in the end. Now I can do all my work using "one" device while having both OSes at my disposal at the same time (no more dual-booting kung-fu), without sacrificing usability when it comes to vagranting with Homestead. I'm sure I'll still be refining this architecture over the following weeks as quirks reveal themselves, but as of now I'm loving working like this.

Now, to enjoy the new office... wait, there's still work to do! Damn!