
Saturday, April 30, 2011

Windows 8 Milestone 1 (build 7850)

Having official access to all Microsoft pre-release software can be quite entertaining.

It's time to test the first Milestone of Windows 8. Yes, I know the latest version is the third Milestone, but things should come in order.

For this I used both VMware and QEMU.

Windows 8 is still very early in development, and there is no long list of new features.

Indeed, the installation still shows "Windows 7" images and names everywhere.


I created a virtual machine of type "Windows 7" with 1 GB of virtual RAM and a 40 GB virtual hard disk.

With this configuration, on my laptop and under VMware, Windows 7 would have taken about 4 hours to install. Windows 8, however, took just about an hour, counting all installation phases and reboots.

At first boot it automatically started downloading VMware's sound card driver, without asking.

In the bottom-right corner appears the icon I chose for the user, as quick access to user-oriented tasks: user settings, locking the PC, switching users and so on.


The desktop background also reminds us not to leak the software, with a pretty stern warning.
This made me laugh because, of course, nobody heeded that warning, just as Microsoft doesn't care when it introduces as new features the ones it copied from other operating systems.


Let's make fun of Microsoft :p.

So, searching for any new feature or application, only three things showed up.


I've never seen that device in any Windows version. I'm not sure what it's about; maybe something for the new UI (not yet present in this build).


I'm not sure if this option exists in Windows 7 Enterprise; it is not in Windows 7 Ultimate and I've never heard of it, so I'll consider it new unless otherwise stated.

This option allows you to burn a CD or DVD with the Windows Preinstallation Environment (the same one used for installation) and recovery utilities, such as the RAM tester, the filesystem integrity checker and so on.


And finally, Internet Explorer 9 Beta. While Internet Explorer 9 is retail now, downloadable for Windows Vista and Windows 7, the version included here is different. I suppose it is a separate branch that will evolve into 9.1 with new features introduced by Windows 8 itself.

Apart from those three differences, the Windows PowerShell shell is included in the installation.

The system seemed quite stable, but installing the VMware Tools made VMware Fusion crash badly.

On QEMU, however, the system was unable to boot.


After 6 hours at that same screen, we can consider the system stuck and conclude that QEMU is unable to emulate it.

Stay tuned for Milestone 2 report.

Friday, April 22, 2011

Copying GPS data from mobile phone photos to camera photos

Currently there are cameras with GPS units, integrated or as an option, but they cover only a handful of models.

There are also standalone GPS units, but they usually have less battery life than the cameras and are quite expensive just for geotagging our photos.

However, nowadays almost all of us use a GPS-enabled smartphone, be it an iPhone, an Android, a Symbian^3 (Nokia) or a BlackBerry. And of course any photo we take with them is geotagged.

So the easiest and cheapest way is to transfer the geotagging (GPS) data from a photo taken with our smartphone to the photos taken with our camera.

For this, we can use the excellent exiftool utility.

This tool is available on all platforms. On Linux we can use our distribution's package manager to install it, on Macintosh I recommend using MacPorts (or downloading it from the official site), and on Windows we can download it from the site.

exiftool is a really complete tool to manage and modify the EXIF information of photos (the one that contains the camera manufacturer, model, lenses used, GPS information and so on), with an extensive set of options. It has no graphical interface, so using the command line (Terminal or Command Prompt) is a must.

We simply take one photo with the smartphone at every place we visit (it does not need to be a great one, or useful at all) and take note, mentally or physically, of which camera photos correspond to that place. There is no need to take a smartphone photo if we move only a couple of meters, for example inside the same building or square. If we need that level of exactness, this is not the solution we want.

Once we arrive home (or at the hotel), we copy the camera photos and the smartphone photos to the same folder (I recommend a folder per place), and then on the command line we run exiftool inside that folder with the following arguments:

exiftool -tagsfromfile IMG_0774.JPG -gps:all -P DSC*

IMG_0774.JPG is the photo taken with the smartphone, and DSC* represents all files whose names start with DSC. Most cameras name their photo files something like DSC00001.JPG; in case your camera uses a different scheme, just change DSC to the appropriate prefix.
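If you sort your photos into one folder per place, the same call can be scripted. A minimal Python sketch (the helper name is mine, not part of exiftool; it assumes exiftool is on your PATH):

```python
import glob

def exiftool_cmd(reference, pattern="DSC*"):
    # Build the exiftool invocation: copy only the GPS tags from the
    # reference (smartphone) photo; -P preserves file modification dates.
    files = sorted(glob.glob(pattern))
    return ["exiftool", "-tagsfromfile", reference, "-gps:all", "-P"] + files

# Run it inside the folder holding both sets of photos, e.g. with
# subprocess.run(exiftool_cmd("IMG_0774.JPG"), check=True)
```

The glob is expanded in Python because, unlike in the shell one-liner above, no shell is there to expand DSC* for us.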

This command will copy ONLY the GPS data from the smartphone photo, leaving all the metadata in the camera photos (including the date they were taken) untouched. It will also keep a copy of each original photo, suffixed with _original (DSC00001.JPG is copied to DSC00001.JPG_original).

Once you have checked the photos are OK (just to be safe), you can delete the smartphone photo and the _original files.

When you put your photos into your favourite organizer (Picasa, Flickr, Lightroom, iPhoto, Aperture) you'll see that all your camera photos now automatically have the correct location (the same one you were at when you took the smartphone photo).

This is not as exact as other solutions, but it's free, fast and automatic.

Wednesday, April 20, 2011

Updating LVM to RAID5, the troublemaker.

For many years now, hades has been running as my fileserver with an LVM array of disks.

LVM is a Linux feature that allows using more than one disk as a single volume; it offers no protection against a disk failure.

And disks failed, many times, and luckily a simple cloning of the old disk to a newer, bigger one solved the problem each time.

And the LVM grew, and its filesystem grew, and now it is time to update it.

The LVM contained the following disks:

Hades 6

Two Seagate ST31500541AS (1500 GB each), one Seagate ST3750640AS (750 GB), one Samsung HD501LJ (500 GB) and one Samsung HD401LJ (400 GB), making a total of 4650 GB of storage (minus partitioning, LVM and XFS overhead).

It was time to migrate to a safer thing, so I bought the following:

Hades 10

5 hard disks, 5 SATA USB enclosures and a more powerful 800 W power supply unit. It's not that hades' 600 W PSU had failed; I just prefer to have more than enough power, because when I first assembled hades it had a lot of power outages: a bad-quality PSU was unable to deliver its promised power, and the hard disks would stop randomly. That day hades earned its nickname, "the troublemaker".

It was also time to fit a new SCSI card, the spare Adaptec AHA-2940U2.

Hades 8

This controller substitutes the previous Adaptec AHA-29160N SCSI card driving the IBM Ultrium LTO2 drive used for backups. The old controller used to hang on power-on and on transfer, and finally died completely (again, "the troublemaker").

The USB enclosures are to hold the old LVM hard disks, do the migration, and then serve as spare external drives to use from time to time (friends are asking me to gift them; forget about it).

Hades 3

With everything put in place and connected, hades is a mess of cables. The orange ones are the SATA cables connected to the new disks, the gray one connects to the system disk, the blue one to the DVD-ROM drive, and the multicolored one is the SCSI cable, connected to the tape drive and its terminator.

Everything assembled, it was time to power up, and then: "beep beep beep". Great!
At first I thought the graphics card was loose; however, a visual inspection showed the following:

Hades 2

At first it's difficult to see, but on closer inspection:

Hades 1

A leaked capacitor. Again the troublemaker honors its name.

It had been running in this condition for months, and now was the time it chose to fail. So I had to buy a new graphics card.

Hades 4

A super-uber ATi Radeon HD 3450, the absolute maximum in performance. Yes, I'm kidding: this is a fileserver, I don't care about the graphics card's performance. It's just that I didn't have any working card at hand and this was the cheapest available. It's also the first time I've put an ATi graphics card in a machine; I always use nVidia ones.

But hades still did not want to boot. This time a RAM stick was malfunctioning, so out it went, halving hades' RAM to just 1 GB. The troublemaker never rests!
However, everything was closed by then for Easter, so getting more RAM will have to wait until Monday.

Finally it booted, so I connected 4 of the LVM disks to the USB enclosures and one to the only available SATA port on hades' motherboard, and the fun started.

Linux booted correctly, except for KDE; expected, as it was configured for an nVidia card, not an ATi one.

The drive order was a complete mess. The system disk moved from sdf to sdk, and the rest of the disks came up in mixed order (SATA, SATA, USB, SATA, USB and so on), but the old LVM volume mounted without a problem.

I created one "Linux RAID" partition inside a GUID Partition Table on each new disk, and created a RAID5 using each freshly created partition as a member.

RAID5 is a system that joins 3 or more disks, striping the data across all of them and creating recovery (parity) data that is also spread across the disks. The parity data equals one full disk in size, so in my case, using 5 Seagate 2 TB disks, I obtained a RAID5 with 8 TB of total available space (and 2 TB used for parity).
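As a quick sanity check of that math, a few lines of Python (just a sketch of the rule above):

```python
def raid5_capacity(disks, disk_size_tb):
    # In RAID5 the parity spread across the array costs the equivalent
    # of exactly one member disk, whatever the member count.
    assert disks >= 3, "RAID5 needs at least 3 disks"
    return (disks - 1) * disk_size_tb

print(raid5_capacity(5, 2))  # 5 x 2 TB disks -> 8 TB usable, 2 TB parity
```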

It's quite easy to create: I just ran mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdh1 /dev/sdi1 and in less than a minute the RAID5 was created.

As soon as the RAID5 was created, the whole 2 TB of parity started being computed, slowing down all other operations: creating the XFS filesystem took 5 minutes. I don't want to imagine how long creating an ext4 filesystem would take, as it is more than 50 times slower to create than an XFS one.

I did a test copy of 37 GB of data from the LVM to the RAID5, and it took 28 minutes. Simple math says that's about 1.3 GB/min, and as I have to copy 3900 GB, that makes about 50 hours. Yes, more than two days.
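The estimate is just the arithmetic from the test copy, spelled out in Python:

```python
# 37 GB copied in 28 minutes during the test copy.
rate_gb_per_min = 37 / 28              # about 1.32 GB/min
total_gb = 3900
total_hours = total_gb / rate_gb_per_min / 60
print(round(total_hours, 1))           # prints 49.2, roughly two days
```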

Hades 5

I don't think USB enclosures have ever been tested for a full 50-hour runtime, nor have their PSUs, but we'll see how this goes.

I pray that The Troublemaker gives me 50 hours of peace and no problems :p

OpenIndiana build 148

It's time to get a huge hardware update: five new hard disks and a software RAID configuration. So I thought it might also be time for an operating system change.

OpenIndiana takes up the role where OpenSolaris left it.
OpenSolaris is the open-source version of the good old Sun Solaris UNIX variant, released on 5th May 2008 with great hype from the community. However, when Oracle bought Sun in 2010, they closed the source again and OpenSolaris was abandoned.

OpenIndiana continues the development of what OpenSolaris started, and sounded like a good candidate for an operating system change, so I downloaded the latest release, build 148, to test it under virtualization.

I started with the latest stable QEMU build; even though on Mac OS X it's not a virtualization package but an emulation one, I don't care about speed.

Unfortunately some bug in QEMU's emulation prevented OpenIndiana's live environment from loading, so I had to move to another virtualization package, in this case VMware Fusion.

The installation is easy, when you boot from the DVD image you get a live session where you can simply use the system or start an installation.

The installation was easy and not informative at all.

Of special note: in the partitioning tool you cannot choose which filesystem to use, or how many slices to create; there is just a "Solaris2" option for the partition you want to install OpenIndiana into. This really means creating a ZFS pool with about 8 volumes (/usr, /, swap, /var and so on) in the chosen partition. There is no way to say, for example, put /usr on another ZFS pool on another partition or disk.

15 minutes later, the system was installed and booted, so time to play with the package manager.

OpenIndiana comes with a good number of packages, all the drivers from OpenSolaris (including binary drivers from ATi and nVidia with 3D support), and almost all the software anyone would need. Of note, Mono was completely missing from the available packages, as well as anything KDE or Qt.

However, no package would install or update; I kept receiving a message saying that I was in a live environment. I rebooted a couple of times, but no luck: there was no way to install or update anything.
This is funny because when you're running live on most Linux distributions (like Gentoo or Ubuntu) you can install updates and packages; they live in RAM and are deleted on reboot, but you can install them.

Also, when I tried to eject the installer CD, no luck. From the GUI, from the Terminal, there was absolutely no way to eject the inserted CD image other than instructing VMware to force the ejection.

OpenIndiana looks very, very promising, but it's not yet stable enough for production. I will stay on Linux.

Tuesday, April 19, 2011

IDEFile to Disk Copy 4.2 conversion

For a while I've been searching for any Twiggy disk image from an Apple Lisa 1 computer, if possible of the Lisa Office installation disks and its applications. However, I've had no luck at all.

One day I found on YouTube a video of a guy repairing his Twiggy floppy drive, and I asked him for images. Unfortunately he did not have any, but he sent me an image that he used for booting, via the IDEFile adaptor, on his Lisa 1.

The image is not in the Disk Copy 4.2 format expected by the emulators (LisaEm or IDLE), but seemed to be a simple dump of the 532-bytes-per-sector disk (yes, the Lisa uses 532 bytes per sector instead of 512, but that's another story).

I checked the image with a hexadecimal editor and saw that the first 20 bytes of each sector are its tags and the remaining 512 bytes are its data. So I started writing a simple conversion tool that reads the tags and the sectors and, using libdc42 from Ray Arachelian, creates a Disk Copy 4.2 image with the same data.
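As a sketch of that layout (the names are mine, not from the actual tool), each raw sector splits like this:

```python
TAG_SIZE = 20    # Lisa sector tag bytes
DATA_SIZE = 512  # user data bytes
SECTOR_SIZE = TAG_SIZE + DATA_SIZE  # 532 bytes per sector in the dump

def split_sector(raw):
    # Split one raw IDEFile sector into its (tags, data) parts.
    assert len(raw) == SECTOR_SIZE
    return raw[:TAG_SIZE], raw[TAG_SIZE:]

tags, data = split_sector(bytes(SECTOR_SIZE))
print(len(tags), len(data))  # prints 20 512
```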

However, when I checked the sectors of the DC42 image with lisafsh-tool (also from LisaEm), they were out of order.

It was time to sleep, and in my dreams I started thinking about interleaving.

The next day I got back to work. Checking the correct order of the sectors, I saw that there was indeed some kind of interleaving in the IDEFile image.

Sector 0 becomes 0 on IDEFile, 1 becomes 5, 2 becomes 10, and so on.

After checking 64 sectors I saw that the sequence was always [0, 4, 8, 12, 0, 4, 8, -4, 0, 4, -8, 0, -12, -8, -4].

It was then a simple matter of taking the expected sector modulo 16 to get the offset inside that sequence, and adding it to the expected sector to obtain the IDEFile sector. Multiplying by 532 gives the exact byte offset at which to start reading.
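A sketch of that mapping in Python (the offset sequence is copied exactly as printed above; note it has 15 entries while the text speaks of a modulus of 16, so I index it modulo its own length here):

```python
# De-interleaving offsets as listed in the post. The post computes the
# index modulo 16, so one entry may be missing from the printed list;
# we index modulo the actual length instead.
OFFSETS = [0, 4, 8, 12, 0, 4, 8, -4, 0, 4, -8, 0, -12, -8, -4]
SECTOR_SIZE = 532  # 20 tag bytes + 512 data bytes

def idefile_sector(logical):
    # Map a logical (DC42) sector number to its position in the IDEFile dump.
    return logical + OFFSETS[logical % len(OFFSETS)]

def idefile_byte_offset(logical):
    # Byte offset inside the IDEFile image where that sector starts.
    return idefile_sector(logical) * SECTOR_SIZE

# Matches the examples in the text: 0 -> 0, 1 -> 5, 2 -> 10.
print(idefile_sector(0), idefile_sector(1), idefile_sector(2))  # prints 0 5 10
```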

I added that piece of code to my little tool and converted the IDEFile image to DC42 again.

Trying to boot it with LisaEm (ROM-less) just made LisaEm enter an infinite loop.

I booted LisaEm into Lisa Office 3.1 and inserted the converted image as an alternate ProFile, configured Lisa Office to see that ProFile, rebooted and voilà!


Lisa Office 3.1 recognized the disk as being from an older Lisa OS version. You MUST read the dialog very carefully, because it is practically the same as the one shown when you insert an uninitialized disk. I clicked "Update".


Waited for 5 minutes and...


Here they are, the contents of the IDEFile image as seen by LisaEm :p

Just out of curiosity, I checked the attributes of the disk.


As you can see, the filesystem version is 14 (0x0E), the same used by Lisa Office 1 and Lisa Pascal Workshop 1.

So the update does not touch the filesystem structures, just the Lisa Office on-disk structures.

I've made the source of my utility available under the BSD license, which is compatible with libdc42's GPL license, and published it at this link.

From what I've read on IDEFile's webpage, it stores more than one image on the hard disk using some "partition" scheme, but I don't know how to extract individual images from it unless I get a whole dump.

Sunday, April 17, 2011

One network filesystem to rule them all, SMB vs AFP

We live in a world of Windows.

We use our pendrives with Windows filesystems, and our networks are connected with the Windows network filesystem, all of that whether or not we use the Windows operating system.

The Windows network filesystem, SMB (for Server Message Block, also called CIFS), was designed in the 80s for DOS and OS/2. It is supported on all other operating systems through the Samba package.
Windows Vista introduced a new version, SMB2, that removes all the unneeded commands and simplifies the needed ones, giving a huge overall performance boost.

Apple, however, has always used AFP (Apple Filing Protocol), its own network filesystem. The Windows Server versions can act as a server for this protocol, as can almost all UNIX systems through the netatalk package.

Unfortunately only Mac OS and Mac OS X can act as clients. And usually they aren't used as such.

Out of convenience, laziness or any other reason, in networks consisting only of Macintosh computers, or in mixed environments, we tend to always use the SMB protocol.

The following benchmark demonstrates why SMB should be avoided whenever possible.

The tests were done using bonnie++, over a Gigabit Ethernet network, with both a Mac OS X client and server.

Version 1.93c, concurrency 1, size 2000M:

          -------- Sequential Output --------   ----- Sequential Input -----   - Random -
          Per Char      Block        Rewrite    Per Char       Block           Seeks
          K/sec %CPU    K/sec %CPU   K/sec %CPU K/sec  %CPU    K/sec   %CPU    /sec  %CPU
AFP         347    98   15924    7   16234  10    689    98   130749     16    1361    36
  Latency 112ms         1620ms       1279ms     36316us        1169ms          221ms
SMB1          2     8   11416    8    4504   6      1     8     7642      5   787.2    19
  Latency 6048ms        1448ms       2001ms     6385ms         565ms           402ms

In this test we can clearly see the advantages of using AFP over SMB.

Writing byte by byte, SMB gets 2 KB/sec while AFP gets 347 KB/sec. That's 173 times faster.
Writing sequentially, SMB gets 11,416 KB/sec while AFP gets 15,924 KB/sec. That's 39% faster.
Rewriting, the difference gets bigger: SMB at 4,504 KB/sec and AFP at 16,234 KB/sec, or 260% faster.

On reading byte by byte, SMB gets just 1 KB/sec and AFP gets 689 KB/sec: 689 times faster.
Reading in blocks, SMB gets 7,642 KB/sec and AFP hits the limits at 130,749 KB/sec, or 1610% faster.

SMB is able to do 787.2 random seeks per second while AFP does 1,361, almost twice as many.
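The speedups quoted above follow directly from the table; a small Python check (numbers copied from the bonnie++ output):

```python
# bonnie++ results: KB/sec, except the seek figures which are /sec.
afp = {"write/char": 347, "write/block": 15924, "rewrite": 16234,
       "read/char": 689, "read/block": 130749, "seeks": 1361}
smb = {"write/char": 2, "write/block": 11416, "rewrite": 4504,
       "read/char": 1, "read/block": 7642, "seeks": 787.2}

for test in afp:
    # Ratio > 1 means AFP is faster on that test.
    print(f"{test}: AFP is {afp[test] / smb[test]:.2f}x SMB")
```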

Also, in all cases, the CPU usage is higher when using SMB.

The conclusion is clear: in all possible cases (Mac, UNIX or Linux server with a Mac client), AFP is a MUST.

There are commercial solutions that let Windows computers act as AFP servers and clients.
However my recommendation is not to use them, but directly not to use Windows computers in networked environments.

When Mac OS X Lion goes public I'll test its SMB2 implementation, which should give a performance boost.

The ultimate Linux filesystems benchmark

Having a car accident makes you have a lot of free time.

So I decided to create the ultimate Linux filesystems benchmark: I took a spare hard disk I had available and tested all the filesystems writable by my Linux system, a 2.6.33-gentoo-r2 kernel.

For the tests I used bonnie++ as root, with the command line "bonnie++ -d /mnt/iozone/test/ -s 2000 -n 500 -m hpfs -u root -f -b -r 999".

My Linux system supports quite a big number of writable filesystems: AFFS, BTRFS, ext2, ext3, ext4, FAT16, FAT32, HFS, HFS+, HFSX, HPFS, JFS, NTFS, Reiser, UFS, UFS2, XFS and ZFS.

I will divide the filesystems into "foreign unsupported", "foreign supported" and "native".

A "foreign unsupported" filesystem is one designed and created outside Linux, for another operating system, that lacks features Linux needs to run flawlessly, like POSIX permissions, links and so on. These are AFFS, FAT16, FAT32, HFS and HPFS.

A "foreign supported" filesystem is one designed and created outside Linux, for another operating system, but supporting all the features Linux needs to run flawlessly, even if the current Linux implementation does not implement them fully. These are HFS+, HFSX, JFS, NTFS, UFS, UFS2, XFS and ZFS.

A "native" filesystem is one designed by the Linux developer community, with Linux in mind, and usually never found outside Linux systems. These are BTRFS, ext2, ext3, ext4 and Reiser.

AFFS, or Amiga Fast File System, is the old AmigaOS 2.0 filesystem, still in use by the AROS, AmigaOS and MorphOS operating systems. It does not support the features required by Linux and is not an option for disks containing anything but data.

BTRFS, or B-Tree File System, is a filesystem specifically designed to substitute the old Linux native filesystems, including online migration of data from the extX filesystems.

ext2, or Extended 2 File System, is an evolution of the original Linux ext filesystem, designed to overcome the limits of the Minix filesystem used in the first Linux versions. ext3 is an evolution that added journaling, and ext4 is a further evolution that changes some structures to newer and faster technologies.

FAT, or File Allocation Table, is the original Microsoft filesystem, introduced with Microsoft Disk BASIC in the 80s and extensively used for data interchange, mobile phones, photo cameras and so on. FAT16 is an evolution of the original FAT (retroactively called FAT12) to allow for bigger disks, and FAT32 is the ultimate evolution, introduced in Windows 95 OSR2, to overcome all those limitations. However, Microsoft has abandoned it in favor of NTFS for hard disks and exFAT (currently not writable by Linux) for data interchange.

HFS, or Hierarchical File System, is the original Apple filesystem, introduced with Mac OS 5.0. Its design is heavily influenced by Mac OS's own design, requiring support for resource forks, icon and window positions, etc.

HFS+, or Hierarchical File System Plus, is an evolution of HFS that modernizes it with support for UNIX features, longer file names and multiple alternate data streams (not only the resource forks). The current Linux implementation of HFS+ does not support all of its features, for example journaling. HFSX is a variant of HFS+ that adds case-sensitive behavior, so that the files "foo" and "Foo" are different ones.

HPFS, or High Performance File System is a filesystem designed by Microsoft for OS/2 to overcome the limitations of the FAT filesystem.

JFS, or Journaled File System, is a filesystem designed by IBM for AIX, a UNIX variant, and so it supports all the features required by Linux. Strictly, its name is JFS2, as JFS is an older AIX filesystem that was deprecated before JFS2 was ported to Linux.

NTFS, or New Technology File System, is a filesystem designed by Microsoft for Windows NT to be the perfect server filesystem. It was designed to support all the features needed by all the operating systems it intended to serve: OS/2, Mac OS, DOS, UNIX. Write support is not available directly in the Linux kernel but in userspace through NTFS-3G, which implements neither all the features of the filesystem nor all of those required by Linux.

The Reiser filesystem was designed by Hans Reiser to overcome the performance problems of the ext filesystems and to add journaling. Its latest version, Reiser4, is not supported by the Linux kernel for reasons outside the scope of this benchmark.

UFS, or UNIX File System, also called the BSD Fast File System, is a filesystem introduced by the BSD 4 UNIX variant. UFS2 is a revision introduced by FreeBSD to add journaling and overcome some of its limitations. Support for both filesystems has been "experimental" in Linux for more than a decade.

XFS, or eXtended File System, is a filesystem introduced by Silicon Graphics for IRIX, another UNIX variant, to overcome the limitations of their previous filesystem, EFS. Linux's XFS is slightly different from IRIX's XFS; however, both are interchangeable.

ZFS, or Zettabyte File System, is a filesystem introduced by Sun Microsystems for their Solaris UNIX variant. It introduces a lot of new filesystem design concepts and is still being polished. Support on Linux is provided by a userspace driver, not by the kernel itself.

The tests check the speed of writing a 2 GB file, rewriting it, reading it, re-reading it randomly, creating 500 files, reading them and deleting them.

The AFFS, FAT and HPFS filesystems failed the create/read/delete files test with an I/O error. On their native operating systems they have no problem creating 500 files, so it's a problem in the Linux implementations.

The output from Bonnie++ is:

Version 1.96 (bonnie++ was run with -f, so the per-character tests are skipped).

Each row reads: filesystem, size, then K/sec and %CPU for sequential output (block), rewrite and sequential input (block), followed by /sec and %CPU for random seeks. For the filesystems that passed the file tests, the row continues with the number of files (500) and /sec plus %CPU for sequential create, read and delete, then for random create, read and delete. The line below each row gives the corresponding latencies.
AFFS 2000M 6014 2 5065 3 14887 5 115.2 9
Latency 9545ms 10553ms 16000ms 842ms Latency
BTRFS 2000M 26126 11 14151 8 69588 5 35.0 262 500 14 72 55300 99 13 56 14 72 35273 99 8 47
Latency 431ms 626ms 166ms 6071ms Latency 6067ms 1712us 32642ms 6632ms 934us 7227ms
ext2 2000M 27592 4 13381 3 57779 3 424.1 5 500 84 63 401150 99 114 0 98 70 745 99 97 28
Latency 326ms 526ms 23808us 279ms Latency 522ms 214us 31756ms 483ms 8831us 878ms
ext3 2000M 27190 9 13522 3 58337 4 392.0 6 500 177 1 293176 99 129 0 177 1 392430 99 154 0
Latency 754ms 626ms 49656us 193ms Latency 859ms 876us 1200ms 844ms 205us 1497ms
ext4 2000M 28852 5 13832 3 57969 3 366.6 5 500 246 2 281186 99 155 1 246 2 372310 99 202 1
Latency 4026ms 658ms 97049us 183ms Latency 903ms 869us 1061ms 900ms 183us 951ms
FAT16 2000M 28201 6 13317 4 55201 3 354.9 6
Latency 326ms 230ms 103ms 269ms Latency
FAT32 2000M 26065 6 13317 4 54575 3 333.0 6
Latency 326ms 230ms 103ms 269ms Latency
HFS 2000M 28761 5 13272 5 50507 11 324.8 3
Latency 426ms 526ms 16808us 150ms Latency
HFS+ 2000M 28705 3 13201 2 57350 4 472.7 2 500 3864 29 3513 99 5342 96 5338 43 248171 99 7754 67
Latency 426ms 426ms 18678us 281ms Latency 100ms 341ms 3064us 16396us 165us 753us
HFSX 2000M 28805 3 13353 2 57481 4 482.1 2 500 3899 28 3547 99 5451 96 5431 43 255570 99 7859 66
Latency 227ms 410ms 24543us 212ms Latency 152ms 338ms 2542us 14442us 211us 1452us
HPFS 2000M 25198 19 13488 6 47790 11 159.1 4 500 6980 97 216652 99 15720 46 7393 82 329146 99 16662 56
Latency Latency 14546us 1853us 3224us 9178us 180us 1510us
JFS 2000M 28486 4 13855 2 59308 3 546.6 3 500 7836 25 379769 99 2289 8 218 2 392966 99 96 0
Latency 679ms 626ms 34570us 136ms Latency 179ms 316us 747ms 2523ms 558us 67889ms
NTFS-3G 2000M 28324 6 12480 3 55439 5 156.5 1 500 5822 12 11353 18 8403 14 6280 14 13918 18 1025 2
Latency 52838us 127ms 20516us 296ms Latency 130ms 144ms 444ms 108ms 25397us 427ms
Reiser 3.5 2000M 27557 11 13567 4 58166 5 462.2 9 500 1704 10 328480 99 1785 10 1498 9 398475 99 448 3
Latency 626ms 683ms 14264us 164ms Latency 1299ms 657us 1882ms 1269ms 207us 2843ms
Reiser 3.6 2000M 27478 11 13472 4 58109 5 459.7 9 500 1458 8 319832 99 1404 8 1284 7 395686 99 396 3
Latency 626ms 526ms 13669us 202ms Latency 2405ms 672us 2078ms 1424ms 205us 3293ms
UFS 2000M 4484 4 14018 4 57305 7 417.1 8 500 102 86 417263 99 1252 6 103 87 414744 99 176 57
Latency 410ms 567ms 81779us 185ms Latency 488ms 164us 110ms 469ms 537us 497ms
UFS2 2000M 4472 4 13333 3 56329 6 431.9 7 500 100 87 419216 99 1257 6 102 87 418432 99 161 53
Latency 426ms 331ms 100ms 512ms Latency 487ms 185us 111ms 467ms 51us 537ms
XFS 2000M 28707 4 13759 3 55647 4 408.4 5 500 559 5 70286 98 557 4 637 6 110524 99 102 0
Latency 5226ms 530ms 29344us 260ms Latency 1555ms 14286us 1897ms 1421ms 277us 8480ms
ZFS 2000M 19540 3 12844 2 32664 2 152.5 1 500 753 4 37453 10 142 0 781 4 399620 99 51 0
Latency 2562ms 2926ms 261ms 2268ms Latency 2531ms 61859us 2965ms 2800ms 203us 5209ms

The table is quite long, so here are comparison graphs:


This is the graph for single-file sequential writing speed. UFS and AFFS show an abnormally slow speed, indicating a big need for optimization.
The fastest ones, without a big difference between them, are ext4, HFSX and XFS.


This is the graph for single-file sequential reading speed. AFFS needs optimization here too.
The fastest one, by a big margin, is BTRFS, and the second fastest is JFS.

These two graphs indicate the speed when writing and reading big files. If this is your intended usage, the recommended filesystems are BTRFS, ext4, HFSX, HFS+, XFS and JFS.
However, if you need maximum portability between systems, HFS+ is the choice, because it's readable and writable by more operating systems than the others.

The following graphs show a more normal filesystem usage, where a lot of files are created, read and deleted at the same time.


This test shows surprising results. Reiser takes the lead among the native filesystems, while even the newest, BTRFS, falls quite behind all the other native ones.
Among the foreign filesystems, HPFS takes the lead. However, because of HPFS's design limitations, the choice is NTFS, even running in userspace.


In randomly reading 500 files, the results invert. NTFS loses its lead to UFS2, ext3 shows a huge improvement over ext2, and conversely BTRFS shows a huge loss of performance.


Finally, deleting all the files, the lead is taken again by HPFS, seconded by HFS+ and HFSX.

Unless you want to write files and never delete them, the obvious choice is HFS+ or HFSX.

Surprisingly, even though it's not running on its native implementation, HFS+ provides the best combination across all tests.
It supports all of the features required by Linux, gives good sequential speed, and maintains its speed when handling a big number of files.

The only bad thing is that Linux does not support its journaling. Not a problem if you take care to unmount it cleanly, without any "oops! what happened to the electricity?". Of course you can always spam the Linux kernel developers so they implement that feature of the HFS+ filesystem, which has existed since 2003, 8 years ago.

Speed does matter

In today's world, speed does matter.

We want to get to our workplaces, our homes and our beds as fast as possible.

Then why are we still using slow networks in our homes?

When we have more than one computer at home, we usually connect them using an ISP-provided switch, either wirelessly or with cabled Ethernet.

And that equipment is limited to a theoretical 54 Mbps wireless, or 100 Mbps cabled. Even if our computers support 300 Mbps wireless or 1000 Mbps cabled, we are limited by that piece of hardware, the slowest one: the switch.

And speed DOES matter, a lot indeed.

I did a little test using netstrain, a simple command-line utility that, run on two computers, measures the network speed between them.

Theoretically a 100 Mbps cabled link should be able to transfer up to 12 megabytes per second; in practice netstrain showed a slower speed of 8 megabytes per second.

Using a not-so-expensive Gigabit (1000 Mbps) switch gives a theoretical transfer of up to 120 megabytes per second, and 78 megabytes per second in practice.
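The theoretical figures are just the link rate divided by 8 bits per byte (rounded slightly down in the text above); in Python:

```python
def mbps_to_mb_per_s(mbps):
    # Megabits per second -> megabytes per second (8 bits per byte).
    return mbps / 8

print(mbps_to_mb_per_s(100))        # 12.5 MB/s theoretical on Fast Ethernet
print(mbps_to_mb_per_s(1000))       # 125.0 MB/s theoretical on Gigabit
print(78 / mbps_to_mb_per_s(1000))  # measured fraction on Gigabit, 0.624
```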

Yes, that's roughly 10 times the speed, using just a 25 € Gigabit switch!

That means that transferring a file will take 10 times less time!

Stop using that crappy switch: spend a few bucks and get a Gigabit switch.

And forget all about wireless, unless there is a real and important need for it.

Cable is about 80% efficient; wireless is about 60% efficient under ideal conditions.
Those ideal conditions stop existing when you turn on your microwave oven, your neighbor receives a call on his new wireless phone, or you live in a building filled with ISP-gifted wireless routers (like mine, with 25 different wireless networks in sight), making wireless efficiency fall to 10% and online gaming an impossible achievement.

Saturday, April 16, 2011

USB booting on PowerPC Macintosh

It is commonly known that only G5 PowerMac and iMac computers can boot from USB, plus of course all Intel Macintosh computers.
However, I found this assumption to be false.

While trying to test MorphOS on my Mac mini G4 I ran into a common problem with these computers: the DVD drive does not read anything.
Unfortunately this is a common problem with slot-in drives, as dust gets into the whole drive, even inside the laser lens assembly.

But I thought that maybe there was a way, so I entered OpenFirmware with the usual Command-Option-O-F key combination at the boot chime, and listed the device tree with the usual "dev / ls" command.

Hanging from the Mac mini's southbridge (/pci@f2000000) I found the USB devices, one empty and one with a hub device (the keyboard), so at least controllers and hubs are supported by OpenFirmware.

I powered off, connected an external slim Asus DVD recorder (really an LG DVD recorder) to the USB ports, entered OpenFirmware again and listed the devices. To my surprise, a "disk@1" device appeared hanging under one of the USB controllers.

I rebooted and entered the OpenFirmware boot menu by pressing the Option key at the boot chime, but the USB disk did not appear there.

Re-entering OpenFirmware and constructing an adequate boot command (boot /pci@f2000000/usb@19/disk@1:2,\\:tbxi) booted MorphOS correctly.
I recorded the whole installation and posted it on YouTube.

With my curiosity at its maximum, I took an older computer, my beloved PowerBook G4 Titanium, attached the USB drive and tried the same.

OpenFirmware worked and loaded the MorphOS bootloader, which loaded the MorphOS kernel, and then, as expected because the PowerBook is not supported, it hung.

I took all the Mac OS disks and a couple of Linux disks to test on the PowerBook, and came to the following conclusions:

1.- USB booting is supported by all Mac OS X bootloaders, but Mac OS X <= 10.3 does not load the USB mass storage drivers at boot, so the kernel is unable to find the root volume.
2.- USB booting is not supported at all by Mac OS 9, because it requires Toolbox drivers and those only exist for ATA, ATAPI and SCSI. I don't know why FireWire boots; maybe because a Toolbox driver is present in ROM.
3.- USB booting is fully supported by MorphOS and by Linux; however, all the Linux distributions I tested expect to load the Yaboot configuration file from the "cd:" device alias, and you must point that alias to your exact USB device in OpenFirmware for Yaboot to work.
4.- If your last boot disk was a USB one, it DOES appear in the boot menu. Otherwise it never will, and you must boot manually from OpenFirmware.


Hi readers,

I'm Natalia Portillo, and currently I'm a 26-year-old girl.

I'm a computer technician, programmer, filesystems engineer and cybergoth.

I'm from the Canary Islands, in Spain.

In this blog I intend to share my experiences, describe tests I do on software and hardware, and write about everything computer-related I see fit.

If you want to check out my software, head to my company's webpage at

If you want to check my personal writings, poems and so on, head to my writings blog at Blogger.

If you want to check my personal webpage, go to my personal webpage.

I hope you enjoy this blog.