open-vm-tools tcz extension for TinyCore 6.4

For those who would like to install open-vm-tools on the newest TinyCore release, please find below a link to a compiled .tcz extension. You still need to download some dependencies, which are listed in build-dependencies. In my particular case, for open-vm-tools to install and run on Core 6.4, it was necessary to download the following extensions:

  • squashfs-tools

To learn how to build and install a TC-based VM with open-vm-tools on board, feel free to visit this link.


How to build your own yVM: step-by-step process

The following instructions were tested on VMware Player, standalone ESXi and clustered vSphere 6.0.


  1. Download Core 5.4
  2. Create a new VM with the following characteristics:
  • 1vCPU
  • 64MB HDD (IDE0:0)
  • 64MB RAM (in the end you will switch it to 48MB but you may need more for the installation process)
  • Remove unnecessary peripherals such as floppy drive etc.

Core Installation

Once the boot process is completed, the first step is to install TinyCore on the hard drive. Run the following commands to install the binaries you will need later in the process:

$ tce-load -wi cfdisk.tcz grub-0.97-splash.tcz

$ sudo su

Check which device represents your IDE vmdk file. By default the disk should be /dev/sda:

$ fdisk -l

Open cfdisk in order to create the necessary persistent partitions (note that cfdisk is run against the whole disk, /dev/sda, not against a partition such as /dev/sda1):

$ cfdisk /dev/sda

In cfdisk, create two partitions:

  • 1 x 32MB bootable partition and
  • 1 x Swap partition (code 82 for type of partition)

Note: After creating the partitions, do not forget to write the changes to disk and to set the ext partition as bootable!

Exit cfdisk and rebuild the filesystem information:

$ rebuildfstab

Create an ext3 filesystem:
$ mkfs.ext3 /dev/sda1

Mount your newly created Linux partition and the installation ISO:

$ mount /mnt/sda1

$ mount /mnt/sr0

Create the necessary folders where grub and the boot files will be copied to:
$ mkdir -p /mnt/sda1/boot/grub

$ mkdir -p /mnt/sda1/tce

Copy the boot and installation files:

$ cp -p /mnt/sr0/boot/* /mnt/sda1/boot/

$ touch /mnt/sda1/tce/mydata.tgz
$ cp -p /usr/lib/grub/i386-pc/* /mnt/sda1/boot/grub/

Create a new grub menu list:

$ vi /mnt/sda1/boot/grub/menu.lst

Press the “i” key to enter insert mode, then include the following information:

default 0
timeout 0
title <yourVMtitle>
kernel /boot/vmlinuz quiet
initrd /boot/core.gz

Type “:wq” to save changes and exit.

Open the grub shell:
$ grub

At the grub prompt, type:

grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Unmount and eject the installation ISO, then reboot:

$ umount /mnt/sr0
$ eject /dev/sr0
$ reboot

Note: Make sure the ISO file is disconnected from your VM so that it does not boot from it during the restart.

Software installation

After the reboot you should end up at a shell prompt. Continue with the installation of openssh, nano, nginx and open-vm-tools.

Let’s start with openssh:

$ tce-load -iw openssh
$ cd /usr/local/etc/ssh
$ sudo cp ssh_config.example ssh_config
$ sudo cp sshd_config.example sshd_config
$ sudo /usr/local/etc/init.d/openssh start
$ passwd (change the password for user tc)
$ sudo passwd (change the password for user root)

From now on you may continue the installation process over an SSH session, but beware: this is not the end! Do not reboot your VM yet.

Install nano:

$ tce-load -iw nano

Install nginx:

$ tce-load -iw nginx

Install open-vm-tools:

Download open-vm-tools binary and dependencies from here.

Unpack them locally and upload them via scp (or WinSCP if you use Windows):

$ scp /<unpacked_packages>/* tc@<yVM_ip_address>:/tmp

Install packages in the following order:

$ tce-load -i openssl.tcz openssl-dev.tcz libdnet.tcz open-vm-tools-modules-3.8.13-tinycore.tcz libtirpc.tcz glib2.tcz fuse.tcz open-vm-tools.tcz

Check that all the aforementioned packages are present in /mnt/sda1/tce/optional/.

If not, copy the missing tcz's:

$ sudo cp /tmp/name_of_the_package.tcz /mnt/sda1/tce/optional/
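If several packages are missing, a small loop can copy whatever has not yet made it across. This is a sketch: copy_missing_tcz is a helper name of my own, and on the VM you will likely need to run the copy via sudo:

```shell
# Helper (hypothetical name): copy any .tcz present in src but missing in dst.
copy_missing_tcz() {
  src=$1
  dst=$2
  for f in "$src"/*.tcz; do
    [ -e "$f" ] || continue            # no .tcz files at all
    base=$(basename "$f")
    [ -e "$dst/$base" ] || cp -p "$f" "$dst/$base"
  done
}

# On the VM (prefix cp with sudo, or run the whole thing as root):
#   copy_missing_tcz /tmp /mnt/sda1/tce/optional
```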

Now save all your work before performing a reboot. First, check that the files you want to persist are listed in the backup list:

$ sudo nano /opt/.filetool.lst

Then add the services that should start at boot to /opt/bootlocal.sh:

$ sudo nano /opt/bootlocal.sh

# put other system startup commands here
/usr/local/etc/init.d/openssh start
/usr/local/etc/init.d/nginx start

Add the installed extensions to the boot list so they are loaded at startup:

$ sudo nano /mnt/sda1/tce/onboot.lst

Verify that open-vm-tools is up and running:

$ sudo /usr/local/etc/init.d/open-vm-tools start

Finally, back up all configuration with the following command:

$ sudo filetool.sh -b

You may now shut down your VM, reduce the RAM to 48MB and boot it again.
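The RAM change can also be made from the host side by editing the VM's .vmx file while it is powered off; the memsize entry holds the RAM in MB. The sketch below creates a two-line stand-in vmx (yVM.vmx is an example name) and patches it; on a real host you would run the sed line against the VM's actual .vmx file:

```shell
# Stand-in vmx file; on a real host, edit the VM's actual .vmx instead.
cat > yVM.vmx <<'EOF'
memsize = "64"
numvcpus = "1"
EOF

# Lower RAM to 48MB (the VM must be powered off when doing this for real).
sed -i 's/^memsize = .*/memsize = "48"/' yVM.vmx
grep memsize yVM.vmx
```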

yVM: Download Page

yVM is a very small VM based on TinyCore 5.4 (kernel 3.8.13) with open-vm-tools, nano, openssh and nginx installed. You can read the story about the project here. If you want to know how to build your own yVM, please click here.

The current version has the following characteristics:

  1. 1vCPU,
  2. 48MB RAM,
  3. 64MB HDD, divided into two partitions:
    1. one 32MB /dev/sda1 ext3-formatted partition
    2. one 32MB /dev/sda5 swap partition
  4. open-vm-tools installed as per instructions from Lapawa + some manual tweaking;
  5. openssh, nano and basic nginx installation (http only);
  6. available usernames: tc and root;
  7. password for both accounts: VMware1!.

vSphere-compatible OVA: download (Redirection to Google Drive, MD5 checksum: cff70e1fdcc4ef3307ff364e2f93c775).
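Before importing, it is worth verifying the download against the published checksum. A small sketch (verify_md5 is a helper name of my own, and yVM.ova an assumed filename for the downloaded OVA):

```shell
# Compare a file's MD5 against an expected value; non-zero exit on mismatch.
verify_md5() {
  file=$1
  expected=$2
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $file"
  else
    echo "checksum MISMATCH: $file" >&2
    return 1
  fi
}

# Usage after downloading the OVA:
#   verify_md5 yVM.ova cff70e1fdcc4ef3307ff364e2f93c775
```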

yVM: An ultra small VM with VM tools

Home labs used to test virtualization solutions are very often limited by physical resources, which forces you either to size down test environments or to accept general slowness. This is particularly painful with pseudo-converged infrastructures or nested ESXi hosts. Regarding the latter, in my experience the great drawback is storage: a VM on a nested ESXi typically gets around 1/10 of the IOPS available to a VM installed directly on a physical ESXi host.

This frequently results in situations where a simple operation on a VM (deployment from a template or via vRealize Automation, a DR test of a Protection Group with Site Recovery Manager, or a simple SDRS operation) takes dozens of minutes, blocking other operations on the nested cluster.

Since I was under pressure to deliver documentation for a project at the time, I promised myself that in my free time I would look for a solution to the issue described above. More recently, I decided to look for small Linux distros that would fulfil the following conditions:

  1. be very small in size (in my particular case IOPS were the limiting factor)
  2. use a small amount of physical memory
  3. have VMware Tools or equivalent enabled

Initially, I found that there was no perfect solution satisfying all of the aforementioned requirements. As a workaround I decided to build a lightweight VM myself, only to land on Ubuntu 14.04 with open-vm-tools, 192MB RAM and a 5GB HDD (slimmed down to 2GB thanks to thin provisioning). That was progress of a sort, but still fairly big in terms of storage.

Fortunately, many homelab and virtualization blogs mentioned TinyCore (here, here, here, here). VMware communities also provided a wealth of information about this distro (here, here, here). My previous experiences with TC had been rather unsuccessful, but I decided to give it a chance (again). TC is a very interesting case of a lightweight Linux distro designed to run fast even on very old hardware. It is also used for appliances thanks to its BusyBox-based architecture, which keeps the OS non-persistent across reboots.

I dug deeper to determine whether it would be the right choice, especially with regard to Tools installation. Although creating a VM and making its configuration persistent wasn't much of a fuss, getting VMware Tools working on top of it was a completely different story. My plan was to build a VM that would satisfy the following requirements:

a) Have the smallest possible footprint with regard to RAM and HDD

b) Have VMware Tools or Open VM Tools (which are officially recognized by VMware as an equivalent of VMware Tools) installed

c) Provide very basic services such as an SSH server and an HTTP server (in order to play around with connectivity scenarios such as NSX or SRM)

The logical decision was to use the most up-to-date release, TinyCore 6.4 (based on Linux kernel 3.16), but after around 10 iterations I ended up with a VM that required around 116MB of RAM to boot successfully. Moreover, stability was not perfect: sometimes there was no memory left during the boot process, which forced TC to start killing off processes to avoid a kernel panic (which eventually happened anyway).

Undiscouraged, instead of using the newest distro I went back to the earlier 3.x, 4.x and 5.x releases. I decided to install from scratch through the CLI via the Core ISO (9MB in size) to make sure that only essential services were installed. If you want the step-by-step procedure, click here. I compiled open-vm-tools version 9.10 on a separate VM with an identical distro, to leave all the rubbish behind and obtain a clean .tcz extension with its dependencies.

Eventually, the initial concept developed into a small project which resulted in an extensively tweaked distro. I called it yVM, where the prefix is borrowed from the yocto- SI prefix to highlight its small size. It is based on TinyCore 5.4 (kernel 3.8.13) and has the following characteristics:

  1. 1vCPU,
  2. 48MB RAM,
  3. 64MB HDD, divided into two partitions:
    1. one 32MB /dev/sda1 ext3-formatted partition
    2. one 32MB /dev/sda5 swap partition
  4. open-vm-tools installed as per instructions from Lapawa + some manual tweaking;
  5. openssh, nano and basic nginx installation (http only);
  6. available usernames: tc and root;
  7. password for both accounts: VMware1!.

Some remarks about the above release:

  1. The interface is CLI-only. I consider it destined purely for testing purposes in VMware-based environments, without much interaction with the OS itself (apart from connectivity tests, i.e. ssh and http);
  2. I removed all unnecessary peripherals such as floppy drive etc.;
  3. In order to maintain backward compatibility with previous VMware vSphere environments, the vmx-hardware level is set to 8 (which is vSphere 5.0 compatible).

All in all, the VM has a size of 19MB when thin-provisioned, and it still retains around 6MB of free RAM and 12MB of free HDD. The IP is obtained dynamically through DHCP, though that can be changed if necessary. It was tested successfully in the following scenarios:

  • Graceful shutdown, SDRS, vMotion and HA on vSphere 6.0;
  • Protection Group test, failover and failback with SRM 5.8;
  • VXLAN overlay and layer-2/3 connectivity via Edge Gateway with NSX 6.2;
  • Replication with VRM 5.8.

Soon I plan to test it against VMware FT, as well as vROps and VeeamONE. In my spare time I will also pre-package it into OpenStack- and Docker-compatible images.

If you are interested in trying it out, feel free to download an OVA file by clicking here. (Redirection to Google Drive, MD5 checksum: cff70e1fdcc4ef3307ff364e2f93c775).

If you want to build your own small VM, please read this post.