Category Archives: System Administration

How to Stop Your Synology NAS from Junking Up Your Directories with @eaDir Directories

If you start mounting Synology volumes over NFS, you will quickly learn that the Synology NAS drops directories cryptically named “@eaDir” into every single subdirectory on your data volumes.

They are hidden from Windows clients, but they are there.

The “@eaDir” directories are created for convenience by a system daemon, and they apparently contain image thumbnails or some such nonsense.  There is no easy or convenient way to turn them off or otherwise stop them from being created.

Getting rid of them takes some effort, and here is the easiest way – simply disable the system daemon.

Disable the Synology Daemon That Creates the @eaDir Directories

To stop the thumbnail service from creating the @eaDir directories, SSH into your NAS and stop the daemon.  This will keep new directories from being created until the next boot.

/usr/syno/bin/synomkthumb -stop
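To confirm the daemon is actually stopped, check the process list.  This is just a sanity check; the exact process name can vary between DSM versions:

ps | grep synomkthumb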

Next, to keep the service from starting up again at boot, delete its startup script:

rm /usr/syno/etc.defaults/rc.d/

Removing the existing directories

SSH into your NAS; you can locate the existing directories by typing:

find /volume1/ -type d -name "@eaDir"

Finally, when you are feeling confident, you can search for and delete them automatically:

find /volume1/ -type d -name "@eaDir" -print0 | xargs -0 rm -rf
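If you'd rather look before you leap, this dry run lists each @eaDir directory and its size without deleting anything (assuming your NAS's find and du support these flags):

find /volume1/ -type d -name "@eaDir" -exec du -sh {} +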

[Linux] How to Send E-MAIL (or SMS) Whenever a User Logs In

In the past, I’ve written about some of the con artists masquerading as consultants whom I’ve run into during my travels as a technical mercenary.

At one gig, a young, inexperienced team lead was conflicted about canning a developer who wasn’t even showing up for work, but who claimed to be working remotely.

Of course, I checked the logs and he never logged in.

The team lead wanted more data, so I suggested that whenever the developer logged in, the lead would get an email.

“You can do that?”  the lead asked. 

Easy.  The solution is to add a few lines to the shell init script in the user’s home directory.  In a few minutes, it was done.

This is also a nice way to shoot yourself an SMS message via your cell phone company’s email-to-SMS gateway when someone logs into one of your cloud instances.  It will give you an immediate notification if someone compromises a system.

In any event, the solution is extremely easy.

Put something similar to the following in the user’s shell init script (~/.profile for sh and bash; ~/.login for csh):

# the recipient address below is a placeholder; replace it with your own
mail -s "user login" admin@example.com << EOF
User $LOGNAME has logged into `hostname` at `date`.
EOF
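Most cell carriers run an email-to-SMS gateway, so you can point the same snippet at your phone.  The number and gateway domain below are placeholders; look up the address format for your own carrier:

mail -s "user login" 5551234567@txt.att.net << EOF
User $LOGNAME logged into `hostname` at `date`.
EOF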



Should You Self-Host Your Blog or Website?

Several years ago, I started an experiment to see if self-hosting would be better than shared hosting for some of my websites.  I promptly split my websites among self-hosting, shared hosting, and a managed WordPress service.  I also tested out Amazon’s EC2 service and RackSpace.


Self-Hosting

Self-hosting is by far the cheapest way to go.  You are probably already paying for an internet connection, so why not use it?

I set up a mac mini as the web server and served up several static websites over my DSL line.  I can’t express how much I liked having full control and the ability to fiddle with the system to improve performance.  I could also see automated hacking attempts flood in from all over the world (but mostly from China).

However, when Qwest was bought out by CenturyLink, I started having problems.  I had a few DSL outages, then the billing issues started.  They stopped sending me billing notices and terminated my automatic debit for my naked DSL connection.

By far the worst DSL outage I had lasted a day and a half; the fix was to reset my DSL modem to factory settings and set it up again from scratch.  The second-worst outage was the result of a power failure while I was on vacation in Florida (the mac mini won’t automatically restart after a power failure).

Conclusion: Self-hosting isn’t all it is cracked up to be.

Shared Hosting

I had one website on a low-cost shared host.  The website was originally written in ASP, but I’ve since rewritten it as an html/css/JavaScript site generated by Perl templates.

My experience with them mirrors my experiences with other low-cost shared hosting providers: you are put on an overtaxed server with very little memory and disk space, and reliability is an issue.  For the first year, the site would go down about once a month.

On a side note, shared hosting does NOT affect your Google search ranking, but response time does.

Conclusion: Shared hosting is not for me.

Amazon Web Services (AWS)

Next, I set up an instance on Amazon Web Services.  Since I am an existing customer, I couldn’t get the one-year free tier.  After setting up an instance, my first month’s bill was roughly $60; the next was $102.  Experiment over.

Conclusion: AWS is too expensive.

RackSpace Cloud Hosting

Finally, I tried RackSpace Cloud Hosting (formerly SliceHost).  RackSpace offers a service similar to Amazon’s EC2 service, complete with on demand provisioning of systems.  Refreshingly, they offer Gentoo.

So far, the virtual server has only gone down when I rebooted it myself.  I can highly recommend RackSpace.


If you are building a SaaS application and don’t have any customers or traffic, I would recommend shared hosting.  Most likely, the hosting provider will have a bigger pipe to the internet and power backup.  It will also be cheaper than a dedicated setup or even a virtual machine.

There is no reason to spend a lot of money until you start getting some traction.

Thereafter, I’d step up to a virtual host and finally, if I were making money, I’d opt for a dedicated server.

2008 Mac Pro (3,1) Upgrades on Deck

Since I’m going to have to repair the Mac Pro, I’ve decided to throw in a few upgrades.  In addition to the ATI Radeon 5770 upgrade, I’m adding 16GB of RAM and two solid-state drives (SSDs).

While most of the hardware arrived today, I’m still waiting for the replacement video card, which should arrive Wednesday.  Once the video card arrives, I’ll crack open the dormant Mac Pro and start stuffing in the upgrades.

I had originally planned on spending all my money on a brand new Mac Pro and turning the 2008 Mac Pro into a virtualization server, but I’ve decided to refurbish my existing 8-core Mac instead.

I’ve been holding off from putting any money into my aging Mac Pro primarily because of two issues: compared to the newer versions, the 2008 Mac Pro has crippled SATA I/O speeds and expensive memory.

The 2008 Mac Pro is hampered by relatively expensive memory when compared to newer versions.  You must get the 800MHz ECC FB-DIMMs in matched pairs.  On a side note, contrary to what many people will say, you can run non-ECC memory, but all of the memory must then be non-ECC.

Apple no longer stocks or sells the memory, so you have to find out where you can purchase it.  OWC sells 16GB in 2GB modules for a jaw-dropping $429.99.  Conversely, if you have a 2011 Mac Pro, 16GB will run you $154.99.  After a lot of searching, I was able to order 16GB from Nemix for $264.88.

The next decision was what SSD to purchase.  As I’ve noted, the 2008 SATA controller theoretically handles 3Gb/second, which after 8b/10b encoding works out to roughly 300MB/second at best.  However, given some design decisions by Apple, the actual throughput is lower still.  Therefore, it doesn’t make sense to put in the fastest, most expensive SSD.

Apple is selling a 512GB drive for a jaw-dropping $749, plus local taxes.  I decided to go cheap, opting for two cheaper Samsung 840 SSDs at approximately $97 each: a small dedicated SSD for the operating system and a dedicated SSD for data.

Lazy, Arrogant, and Incompetent System Administrators and Network Administrators

While the vast majority of IT professionals I have worked with over the years are professional and competent, in the last several years I’ve met some of the most arrogant, lazy, and incompetent people I’ve ever had the displeasure of meeting.

Today, I had a run-in with one of those people.  And try as I may, I just have to rant, because I’m so disappointed that these people are able to keep their jobs.

Here’s the thing.  When I first started my career (when dinosaurs roamed the earth), I sat on both sides of the fence – both in IT and software development.  I ran backups, wrote code, reset passwords.  I dealt with users.  I know the pain of dealing with people who don’t know and don’t care.

Later on I moved solely to developing software and haven’t looked back.

Truth is, too many administrators are jerks.  It wasn’t always this way.  Here is a tiny sample of some of the idiocy I’ve had to deal with lately:

I was told that if I plugged my MacBook Air into the corporate network, I would be fired, because Apple wasn’t supported by IT.  Never mind the fact that we were developing an iPhone application.  Even after someone clued him into the fact that you can only compile iOS code on a Mac, he stuck to his guns.  I gave up and stopped bringing my development laptop.  In frustration, they attempted to outsource the iOS project overseas.  A year later, the product still wasn’t released.

While developing an embedded device which ran Linux, I was told that Linux wasn’t supported and I couldn’t plug any Linux hosts into the network.  When it was explained that we would need Linux to develop software for, um, Linux, the solution was to purchase every developer a second PC, which could only be connected to a “test” network that wasn’t routed to the Internet.  We resorted to putting two NICs in the Windows hosts so we could copy patches and code to the Linux hosts.

When the IT guy found out about the second NICs, he flew into a rage and swore that having more than one Ethernet card in a PC would cause it to bridge the two networks together.  Fortunately, he had lost all credibility by this point.

Additionally, while he was hunting down a rogue DHCP server, he spied that we had Linksys switches in our cubicles.  Another IT rage ensued.  The solution to the imagined problem was to throw away every purchased switch and buy every single developer a Cisco rack-mountable switch.

But nothing can top what happened today.  I’m working at a large multi-national corporation.  I was developing code on CentOS 6, but my manager wanted me to install and use a customized Red Hat Enterprise kickstart image on a server, created by “IT.”  Before I continue, let me say that the butchered RHEL version is already end of life.

While the CentOS image installed without a problem, the RHEL image had a multitude of problems from the get-go.  They had patched the system to authenticate against an Active Directory server (that worked) and mount NFS shares over the system directories (that didn’t work).  Unfortunately, the IT guys needed to tell the NFS server that my new server was authorized to mount the file shares.

An easy fix.

Three days later, I get a call from the “unix group.”  The voice at the other end of the line was shaky with agitation.  At first, I thought maybe I was too verbose on the IT request.  However, it became clear that he had no intention of investigating the problem at all.

We don’t support that hardware, he explained.  He rattled off a very small list of supported hardware, all Dell PCs.  He implied that we should toss out the Dell rack-mount server and purchase a supported Dell tower PC, never mind that the server’s hardware is essentially identical to the workstation’s.

I was simply stunned.  What did the hardware have to do with DNS and NFS setup?

I fired off an email suggesting that I could show the “unix group” guy how to configure the services if he needed help.  I sent the helpful email before it even occurred to me that he might be insulted.
A terse response came back with over half a dozen managers CC’d.  My team lead diplomatically scheduled a face-to-face meeting with some IT people.

In the meantime, I had access to a blessed tower configuration with the OS installed.  When I started to upgrade some of the packages, yum threw out thousands of errors when I tried to verify the yum repository.  It turns out that someone had decided it would be a great idea to point yum at the CentOS repositories and had updated most of the packages from there.

Yes, they were paying thousands of dollars for RHEL licenses, and were running CentOS servers.


Building a Kick-Ass Mac Mini Build/Integration Server for iOS, Android, Blackberry and Mac Development, Part I


This is the first of a series of blog posts where I walk you through the process of turning an old mac mini into a kick-ass, Swiss Army knife build/integration/SCM server.


Several years ago, I purchased a mac mini to run as a dedicated web server.  The server worked well, but issues with my ISP caused me to abandon self-hosting and move my virtual domains to the cloud.  Since then, the mac mini has been quietly sitting on the shelf, until now.

A few weeks ago, I purchased a Synology RS812 NAS appliance and quickly moved all of my subversion and git repositories to it.  However, I started to wonder if having my life’s work stored on the NAS was really a good idea.  If the drives got corrupted, I would lose everything.

I started to spec out a new server to throw in my 12U rack.  I spent hours poring over specs for server cases and power supplies, looking for the quietest ones I could find.  I quickly came to the conclusion that a passively cooled system would be slower than my existing mac mini.

I pondered.  Maybe I could use the mac mini as a subversion server, mirrored to the subversion repositories on the NAS.  Then an idea struck me: why not turn the mac mini into a dedicated build server?

I purchased OS X Server ($20), and then I started to realize how incredibly useful this mac mini could be.  By spending $20 for OS X Server and $139 for a new disk drive, I get all of the following:

•    Software Update Caching.  Right now, each of my Macs polls Apple’s servers to check whether there is a software update and, if there is one, downloads it directly from Apple.  OS X Server has a service that downloads each update exactly once; each of your Macs then installs the update from the server, saving bandwidth on your Internet connection.

•    LDAP.  With the LDAP server, you can have a single network login for all of your Macs, MacBooks, etc.

•    Provisioning.  The server can control iOS devices and macs, pushing down developer certificates.  This is a big win for an iOS developer.

•    Source code repositories.  The mini will host subversion and git source code repositories.  The subversion repositories will be mirrored to the NAS, so at any point I have two copies of my subversion repositories in sync.

•    Build server.  With the mac mini, I will be able to build Mac, iPhone, iPad, BlackBerry, and Android applications.  More importantly, with Jenkins, I can also have a build slave running on Windows.

•    And lastly, you get a crap wiki server.

Upgrading the Hardware (the Path to Hell is Paved with Good Intentions)

After I upgraded the mac mini to Mountain Lion, the performance was terrible.  Granted, the mac mini’s performance was never great to begin with, by any measure, but I needed more speed.

So, I drove to the store and picked up a 1TB Seagate solid-state hybrid drive (SSHD).  I cracked open the mini, admired the delicate craftsmanship, and then installed the drive.  That is when my plan nearly derailed.

I went searching through my boxes for the original DVDs that came with the mac mini and installed the OS.  However, I had forgotten that the App Store didn’t come out until after Snow Leopard was released.  Therefore, if I wanted to upgrade to Mountain Lion, I’d have to upgrade to Snow Leopard first.  The only problem was that after several hours of searching for the Snow Leopard installation DVDs, I simply couldn’t find them.
I downloaded the Lion installation package, copied it to the mini, and tried to install.  The installer refused, stating that the OS was too old; I would have to upgrade to Snow Leopard first.

After another futile attempt to find the missing installation DVDs, I tried to use Migration Assistant (which doesn’t copy the operating system files).  Time Machine was no good either.

In a fit of desperation, I burned the installer to a USB key.  With that, I was able to boot the mini and finally install Mountain Lion.
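For anyone trying to reproduce this: the trick is restoring the InstallESD.dmg buried inside the downloaded installer app onto the USB key.  A rough sketch of the approach, where the installer path and the /Volumes/Untitled volume name are assumptions you should adjust for your own system:

# the installer app contains a disk image that can be restored onto the USB key
sudo asr restore --source "/Applications/Install OS X Mountain Lion.app/Contents/SharedSupport/InstallESD.dmg" --target /Volumes/Untitled --erase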

Each step above wasted several hours, and when you can only devote several hours a night to this side project, this adventure took nearly a full week to run its course. 

Now, I can honestly say that purchasing the Seagate SSHD was well worth it.  It may not be the fastest drive on the market, but it is leaps and bounds faster than the 5400rpm Toshiba drive.  The system boots fast and is actually usable now.

The only problem is, I broke the heat sensor connector when reinstalling the drive.  The connector sheared off the board.  I was able to solder the sensor cable directly to the pads, but the fan is running wide open now.   There are two options to fix this, which I will cover later when I finish building the server and come back to it.

Installing OS X Server and Xcode

Next, I installed the $20 OS X Server app, which installs on top of your existing operating system.

Turn on the caching server and then update your system from within the Server app.  Launch the App Store and update your software.  You will see the caching service start downloading files and using disk space.

Next, I downloaded Xcode.  Once Xcode is downloaded, run the app, go to Xcode -> Preferences, and click on the Downloads icon.  Download the command line tools.

Once this has finished, take a moment to admire what you have done.  The following software packages have been installed without needing to compile anything: subversion, git, perl, php, ruby, python, and of course, Apple’s Xcode compilers.
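A quick way to confirm the tools are actually on your path (the version numbers will vary with your Xcode release):

svn --version --quiet
git --version
perl -v | head -2
python --version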

That is enough for now.  Next time, we can start setting up the Subversion server to mirror to another repository.
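As a teaser, the mirroring will be built on Subversion’s svnsync tool.  Here is a minimal sketch of the setup with placeholder repository URLs; note that svnsync requires the mirror repository to have a pre-revprop-change hook that exits 0:

# create an empty mirror repository (on the NAS), then initialize and sync it
svnadmin create /volume1/repos/myproject-mirror
svnsync initialize svn://nas/repos/myproject-mirror svn://macmini/repos/myproject
svnsync synchronize svn://nas/repos/myproject-mirror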


In Search of a Network Attached Storage (NAS) Nirvana

When I first purchased my eight-core Mac Pro, I envisioned a beast of a development workstation with multiple virtual operating systems running simultaneously.  I wanted one computer that I could use to develop software in any language, for any operating system or embedded device.  I paid a small fortune for the best workstation I could get my hands on at the time.

For the most part, it worked fantastically well.  It allowed me to get rid of several PCs, and the detritus that accumulates when you continuously build and upgrade computer systems.  I no longer trip over salvaged PC chassis or keep a stack of cables and drives on my bookshelf.

Better yet, I don’t have to save disk drives with boot images on them, agonize about reformatting and blowing away an operating system that I previously installed, or fiddle with grub for multi-boot.  If I need a new Red Hat, Ubuntu, Gentoo, CentOS, or Windows box, I just provision a new virtual server in minutes and I’m done.  When I no longer need it, I delete the virtual image.

However, there have been some problems with this setup.  While the Mac Pro has been a workhorse, it hasn’t been totally pain free.  In the last five years I’ve had crashes, memory DIMM parity errors, freezes, a blown-out ATI video card and, lately, desktop hangs.

For a while, I’ve planned on purchasing the next-generation Mac Pro, which hasn’t been released yet.  I haven’t made any computer upgrades or purchases for well over a year while waiting for the next Mac Pro, which may never come to pass.  My plan was simple: purchase the new Mac Pro and re-provision the existing Mac Pro as a file server, build server, code repository and more.

I’ve decided not to wait any longer and get a dedicated server appliance.  Here is what I need from it:


Rack mount.  I’ve run out of office space and have decided to pay a premium for rack mount hardware.  I have a 12U Middle Atlantic wood laminate rack with some 2U shelves for most of my networking equipment.  In the future, I’m planning on a full-scale rack after we move to a new house.

Host source code repositories.  Most of my old source code projects have been converted to git repositories.  I have both git and subversion repositories on my Mac Pro.  I want to move them to another server which is backed up frequently, with scripts to push my git repositories up to GitHub, where they are hosted.

Automated backup of Linux, Mac, and Windows machines.  For our Mac machines, this means Time Machine support and AFP; for Linux, rsync+ssh (see the sketch after this list); for Windows, Samba support.

RAID support with the ability to expand.  Although I’m not 100% sold on the benefits of RAID for small businesses or home use, especially with desktop drives, I want RAID.

Power efficient.  I want the entire server to draw less than 60 watts.

Whisper quiet.  My rack is a laminate box with rack rails sitting two feet away.  I cannot tolerate a server that screams like a jet engine.
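For the Linux machines, the automated backup can be as simple as a nightly rsync over ssh pushed from each host to the NAS.  A minimal sketch, where the backup user, hostname, and paths are all placeholders (in practice, this would run from cron):

# mirror /home and /etc to the NAS; --delete keeps the copy exact
rsync -az --delete -e ssh /home /etc backup@nas:/volume1/backups/$(hostname)/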

NAS Appliance Versus Server (Build Versus Buy)

The first decision was whether I wanted to setup a full blown server with a chassis that could accommodate a bunch of drives, or simply purchase a commercial NAS appliance.

In the past, I would have automatically opted for the more fun route: search the darkest corners of the internet for parts, assemble them, spend days fiddling with a Dremel, install Gentoo and cross compilers, and ultimately beam with pride after a healthy dose of profanity and self-inflicted pain.

After searching for passively cooled main boards and quiet rack mount chassis, I figured out pretty quickly that it would cost more and take more time for me to do it myself.

If power and noise weren’t issues, I would have a lot more options.

Decision: Off the shelf NAS appliance.

Which NAS?

Next, I got down and started researching NAS appliances, both rack mount and desktop.

Over the years, I’ve looked at NAS appliances and haven’t been keen on what I’ve seen.  Most were horrifically slow and overpriced.  However, in the last several years the performance and features have gone up, and prices have come down to a level where I’m almost comfortable slamming my money down on the table.

Drobo was plagued with problems, with a large number of bad reviews floating around.  Some users are happy, but others have suffered greatly at the hands of the proprietary BeyondRAID.  I’m willing to bet that most of the problems have to do with users using desktop drives that don’t support TLER.  Worse yet, the performance reviews show the Drobo, like many NAS appliances, is pitifully slow.  Finally, the Drobo 5N is listed at $568.99, without drives.  The 8-bay version is a whopping $1,599.99 on Amazon.  And the tray to rack mount the device is another $200.  Too expensive.

Next, I narrowed the field down to QNAP and Synology.  Judging by the recommendations and reviews, these are the two most revered NAS appliance companies.

After exhaustive searches and weeks of analysis paralysis, I finally ordered a Synology RackStation 4-Bay 1U NAS for $617.98.


Server Naming Conventions?

I’m in the process of provisioning a new server, so I decided to confront the challenging issue of what to name the computers.

A quick search led me to believe that I’m not the only one who has wrestled with this problem.  There is even a service which tracks naming schemes.

What did I choose?  Football teams and towns?  Mythological deities?  Planets?   Cartoon characters?  Presidential pets?  Star Trek characters?  Famous Monopolists?  Classical Composers?  Sound effects?  Beer names?  Simpson characters?  Names of narcotic pain killers?  James Bond movie villains?

No, I’ve decided to use elements from the periodic table.  Servers are named after noble gases; development and test machines after unstable elements.