Category Archives: Productivity

How to Stop your Synology NAS from Junking up your directories with @eaDir directories

If you start mounting Synology volumes over NFS, you will quickly learn that the Synology NAS drops directories cryptically named “@eaDir” into every single subdirectory on your data volumes.

They are hidden from Windows clients, but they are there.

The “@eaDir” directories are created for convenience by a system daemon, and they apparently contain image thumbnails or some such nonsense.  There is no easy or convenient way to turn them off or otherwise stop them from being created.

Getting rid of them takes some effort, and here is the easiest way – simply disable the system daemon.

Disable the Synology daemon that creates the @eaDir directories

To stop the thumb service from creating the @eaDir directories, SSH into your NAS and stop the daemon.  This will keep new directories from being created until the next boot.

/usr/syno/bin/synomkthumb -stop

Next, to remove the service from starting up when rebooting, delete the script:

rm /usr/syno/etc.defaults/rc.d/S77synomkthumbd.sh

Removing the existing directories

SSH into your NAS and you can locate them by typing:

find /volume1/ -type d -name "@eaDir"

Finally, when you are feeling confident, you can search for and delete them automatically:

find /volume1/ -type d -name "@eaDir" -print0 | xargs -0 rm -rf
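
If you want a sanity check before deleting anything, you can count how many directories the same expression matches first:

find /volume1/ -type d -name "@eaDir" | wc -l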

How to Breathe New Life into a Dying Mac Pro, On a Budget

As a technical mercenary, I find myself continually needing more computing power and more space, which is one reason I purchased the Mac Pro, to get ahead of the technology curve.

It has worked out beautifully, but more and more I find myself muttering, “if only I can get one more year out of it…”

Sadly, that is what I say about my 1999 F150 pickup truck.  It is now fifteen (15) years old, and keeps running year after year. 

But the idea is the same — pour just enough money into the system to keep it useful, until it absolutely needs to die.  And that is where I’m at with the Mac Pro.  Once I’m done with it, it will become a server relegated to the basement for another five years. 

Here is how I added some relatively simple upgrades to keep the rig running.

Storage

The best upgrade I can suggest, bar none, is moving to solid state drives (SSDs).  You won’t realize their full potential due to the limitations of the SATA controllers, but the speed increase will be dramatic nonetheless.  Also, make sure TRIM is enabled for your drives; OS X only enables it for Apple-supplied SSDs by default, so third-party drives need a TRIM enabler utility.
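
If you want to verify whether TRIM is actually active for a given drive, System Information reports it on the SATA bus.  A quick terminal check (the exact field name may vary slightly between OS X releases) looks like this:

system_profiler SPSerialATADataType | grep "TRIM Support"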

If you are strapped for cash, get one SSD that is big enough to hold the operating system.  Also, get one that is at least 50% bigger than you think you will need.  I purchased one for the OS and one to put my user directories on.  I’m constantly migrating data to my NAS in order to keep the system running.

Adding USB 3.0

My 2008 Mac Pro came with USB 2.0.  While looking at DAS solutions that might be used by the newer Mac Pro, I settled on USB 3.0. 

Pick up an Inateck PCIe USB 3.0 card.  The chipset is natively supported in OS X Mavericks, so no special drivers are needed.

RAM

This was the first upgrade I did, the first year I purchased the system.  The stock 2008 systems shipped with a meager 2GB of RAM.

Those are the big three upgrades to refresh your Mac Pro.

[Linux] How to Send E-MAIL (or SMS) Whenever a User Logs In

In the past, I’ve written about some of the con artists masquerading as consultants whom I’ve run into during my travels as a technical mercenary.

At one gig, a young and inexperienced team lead was conflicted about canning a developer who wasn’t even showing up for work but who claimed to be working remotely.

Of course, I checked the logs and he never logged in.

The team lead wanted more data, so I suggested that whenever the developer logged in, the lead would get an email.

“You can do that?”  the lead asked. 

Easy.  The solution is to add a few lines to the shell init script in the user’s home directory.  In a few minutes, it was done.

This is also a nice way to shoot yourself an SMS message via your cell phone company’s email-to-SMS gateway when someone logs into one of your cloud instances.  It will give you an immediate notification if someone compromises a system.

In any event, the solution is extremely easy.

Put something similar to the following in the user’s shell init script (.profile for sh/bash, or .login for csh):

# -s sets the subject; plain mail will not honor From:/To:/Subject: headers placed in the message body
mail -s "user login" youremail@address.com << EOF
user $LOGNAME has logged into `hostname`
EOF
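
The SMS variation is the same idea; you just address the mail to your carrier’s email-to-SMS gateway instead.  The gateway address below is only a placeholder, so substitute whatever your own carrier uses:

# the address below is a placeholder; look up your own carrier's email-to-SMS gateway
mail -s "login alert" 5551234567@sms.example-carrier.com << EOF
user $LOGNAME has logged into `hostname`
EOF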

The PC is not dead, we just don’t need new ones. Really?

Undeniably, there has been a major cultural shift away from desktops towards tablets and mobile devices.  But it seems every week another person defiantly declares that desktops are dead and the “Post-PC” era is now upon us.

“PC sales are declining, because people don’t need to upgrade.  They are good enough,” they emphatically chant in unison.  “PCs are dead.”   Translation — we don’t need to buy new ones – the ones we have are plenty good enough.

Hogwash. 

Personally, I’m tired of sensationalist articles that point to the PC market’s decline and conclude that nobody likes or uses PCs anymore, or that they will disappear entirely.

I’ve simply seen the pendulum swing too many times before: from mainframes, to PCs, to servers, back to PCs, to tablets, to “the cloud,” and soon to the “internet of things” (or lots of insecure embedded devices and lots of insecure bloated software on network infrastructure).

I see this logic applied in a dollars-and-cents way by corporate IT departments.  Every day, I’m struck by the quiet absurdity of giving expensive engineers inexpensive and painfully slow machines to ply their trades.  While I understand the lure of pre-negotiated discounts and standardized desktop deployments, I see firsthand the waste in time, effort, and labor caused by slow computer systems that are “good enough.”

I’m using a 2008 Mac Pro, which I regularly bring to its knees.  I’ve upgraded the RAM and moved to SSDs.  While the upgrades have definitely improved the machine, I will be upgrading to a bigger, better machine just as soon as I can.

If all you are doing is writing PHP and JavaScript and looking at Facebook, then yes, you don’t really need a fast machine, but please spare us your prognostications and pronouncements about the future.

Back to Life Without Twitter

I just deactivated my twitter account and deleted every app that has any integration with twitter; I simply can’t take the spam anymore.

Currently, spammers are hijacking twitter accounts and spewing private twitter messages all over the place, with links to websites that contain malware.  I’ve had enough.

To be honest, Twitter’s popularity and usefulness have always been a mystery to me.  I’ve always been perplexed by people’s love of twitter.  I’ve found twitter to be so aggressively worthless that I assumed I must have missed something.

So I was left scratching my head when Twitter filed an S-1 with the SEC and publicly stated it would do an IPO on the NYSE.  It is rumored to end up with a jaw-dropping capitalization of $15 billion to $16 billion.  That is billion with a B.

Someone is going to lose some money on that investment.

Searching for the Ultimate Keyboard for Programmers

Essentially, I make my living by typing.  For more than eight hours a day, I’m hammering out code, e-mails, specifications, and documentation, and cajoling various operating systems into doing what I want.

So I’m always searching for the best keyboard.  But as near as I can tell, “best” is subjective at best.  Some people prefer scissor-switches, while others prefer mechanical keys, buckling springs, or even membrane domes.

As a result, I’ve become somewhat of a keyboard snob.  I even bring my own keyboard to work when I start a contract, and take it with me when I leave.  I simply can’t type well on the $12 Dell keyboards.  It just doesn’t feel natural, and I’ve found that my typing suffers greatly when forced to use one.

So far, the best keyboard I’ve found is the Logitech Dinovo Edge keyboard, which clocks in at $179.99.   I purchased both the PC version and the Mac version, so I would have the same keyboard at work and at home.  They are identical except for the location of the command/windows buttons, which saves me from having to transition between keyboards – they have the exact same look and feel. 

The Logitech Dinovo Edge is a Bluetooth keyboard with a rechargeable battery (no cords), and it uses scissor-switches for a nice tactile feel, though the feedback is fairly minimal.  On the plus side, it is quiet.  Oddly enough, the PC keyboard’s lettering has slowly worn away as I monkey-hammered code for months on end.  The only downside is that it is difficult if not impossible to clean.

So when Jeff Atwood announced that he “designed” a new keyboard for coders, I was intrigued.

Jeff Atwood (of Stack Overflow fame) decided that he needed a new keyboard for his coding adventures, so he designed a mechanical keyboard that uses Cherry MX Clear mechanical keyswitches.  The keyboard is made by WASD, which is renowned for its custom mechanical keyboards.

I can totally understand why.  I absolutely hate the Chiclet keyboards produced by Apple, and moreover, I absolutely loathe the Dell keyboards.  When I first purchased my Mac, the aluminum keyboard lasted about a week before I replaced it.  I really tried to get used to the aluminum Chiclet keyboard, but I just couldn’t; I didn’t like the membrane feel.

But back to the CODE keyboard. 

I’ve been waiting for some reviews to come out.  The first production run has sold out, so I’m assuming there are tons of programmers who worship at the altar of Atwood, but I just haven’t seen any in-depth CODE keyboard reviews yet.

In any event, I’m thinking of upgrading my keyboard to one with mechanical switches.   So stay tuned…

2008 Mac Pro (3,1) Upgrades on Deck

Since I’m going to have to repair the Mac Pro, I’ve decided to throw in a few upgrades.  In addition to the ATI Radeon 5770 upgrade, I’ve decided to add 16GB of RAM and two Solid State Disk drives (SSDs).

While most of the hardware arrived today, I’m still waiting for the replacement video card, which should arrive Wednesday.  Once the video card arrives, I’ll crack open the dormant Mac Pro and start stuffing in the upgrades.

I had originally planned on spending all my money on a brand new Mac Pro, and turning the 2008 Mac Pro into a virtualization server, but I’ve decided to refurbish my existing 8-core mac instead.

I’ve been holding off from putting any money into my aging Mac Pro primarily because of two issues.   If you compare the 2008 Mac Pro to the newer versions, two things are immediately problematic when upgrading – the crippled SATA I/O speeds and expensive memory.

The 2008 Mac Pro is hampered by relatively expensive memory compared to newer versions.  You must get the 800MHz ECC FB-DIMMs in matched pairs.  On a side note, contrary to what many people will say, you can run non-ECC memory, but then all of the memory must be non-ECC.

Apple no longer stocks or sells the memory, so you have to find it elsewhere.  OWC sells 16GB in 2GB modules for a jaw-dropping $429.99.  Conversely, if you have a 2011 Mac Pro, 16GB will run you about $154.99.  After a lot of searching, I was able to order 16GB from Nemix for $264.88.

The next decision was what SSD to purchase.  As I’ve noted, the 2008 SATA controller is rated at a theoretical 3Gb/second.  However, given some design decisions by Apple, the actual throughput is less than that.  Therefore, it doesn’t make sense to put in the fastest, most expensive SSD.

Apple is selling a 512GB drive for a jaw-dropping $749, plus local taxes.  I decided to go cheap, opting for two cheaper Samsung 840 SSDs: a small dedicated SSD for the operating system and another dedicated SSD for data, at approximately $97 each.

Google: GPAs are Worthless and So are Stupid Interview Questions

As a nomadic technical mercenary, I’ve been the victim of a lot of job interviews. I’ve been subjected to all manner of technical interviews, employment screens, and tests. Behavioral, group interviews, competency based interviews, panel interviews, phone interviews, unstructured interviews; I’ve been through them all.

This has given me some unique perspectives on the subject.

Luckily, as a consultant, the employers have had such a hard time hiring people through their own processes that I get to bypass much of the asinine procedure.  I get hired, and more often than not, I get offered a job later.

For a long time, I had some very strong opinions about how some of the major tech companies were interviewing, so much so that I would never have even considered applying to work for any of them.

This week, it seems Google came to the same conclusions, but from an employer’s perspective (Google: GPA’s are Worthless) after applying analytics to interviews and hiring outcomes.

I have to agree with Google’s conclusions.

I can’t tell you how many bad interviews I’ve had. Most people are naturally bad at interviewing, but some are truly terrible, especially when they want to be like Microsoft or Google.

For a while, the fad was to ask inane brain teasers or questions that the interviewer didn’t know the answers to.  Microsoft pioneered the interview puzzle, and the fad quickly spread to other tech companies.

Unfortunately, when you use puzzles as a litmus test in an interview, you end up basically hiring people who are good at puzzles. The world is full of PhDs who are intensely smart, but are completely impractical, and would rather mull over an academic problem than actually crank out working code and ship a product.

I’ve worked with these kind of people.

They are smart, but it takes them months to produce code.  They end up researching linguistic constructs of a language to deeply understand things like why JavaScript in Internet Explorer 6 doesn’t work like it does in Internet Explorer 8.

Some of my favorite interview questions:

My personal favorite interview question is, “where do you see yourself in five years?”  I don’t get this one much anymore, considering how many years of consulting experience I have, but I usually answer, “Well, since this is a six-month contract, probably working somewhere else.”

This year I interviewed for a large company that does security software and was asked, “how many sockets can a process have open at one time?” My answer was it depends on the OS, and then gave my best answer for Linux. The interviewer replied, “oh, I didn’t mean Linux, I meant OS/2,” and then told me about a customer support issue that occurred two decades ago.

I’ve only had two puzzles pitched at me during interviews, and both times it did nothing but frustrate the interviewer.

Sometimes it went like this:

“How many golf balls can you stuff into a 747?” The interviewer smirks.

“Is that a big problem here at XYZ Corp?” I ask, perplexed.

“It’s one of our standard interview questions. It gives us a way to see your problem solving methods.”

I sigh. “Well what is the galley configuration?”

“I don’t know, what do you mean?” he asks, annoyed.

I explain, “I’m a pilot, and Boeing made several configurations: a cargo configuration, a four-engine double-decker…”

“Oh, the standard configuration,” he says.  I nod.

“Do you want the aircraft to fly?”  The interviewer’s head tilts 30 degrees.  “There are weight limitations and you have to be cognizant of the center of gravity.  How much do your golf balls weigh?  Are they standard golf balls?  Do you want it to take off with full fuel?  Fuel weighs…”

This goes on until the interviewer gives up and changes the subject.

Once I was asked, “How many dimes stacked on top of each other would it take to reach the top of the Empire State Building?”  It was at the end of an all-day gauntlet interview.

My response: “That is a stupid question.  The dimes would never be able to be stacked anywhere near high enough to reach the desired height.  A slight breeze would topple the stack.  And I don’t think dimes are uniform in thickness, so they won’t stack very well unless you glued them together.  If you glued them together, the easy answer is the desired height divided by the thickness of a dime, provided you could secure the stack properly to keep it from toppling.  However, you would probably have to sink enough of it into the ground, and make sure the glue was strong enough that it didn’t shear off, because if it wasn’t perfectly vertical the dime-pole would topple and probably kill someone.”

They decided to go with another candidate.

Building a Kick-Ass Mac Mini Build/Integration Server for iOS, Android, Blackberry and Mac Development, Part I


This is the first of a series of blog posts where I walk you through the process of turning an old mac mini into a kick-ass Swiss Army knife build/integration/SCM server.

Introduction

Several years ago, I purchased a mac mini to run as a dedicated web server.  The server worked well, but issues with my ISP caused me to abandon self-hosting and move my virtual domains to the cloud.  Since then, the mac mini has been quietly sitting on the shelf, until now.

A few weeks ago, I purchased a Synology RS812 NAS appliance and quickly moved all of my subversion and git repositories to it.  However, I started to wonder if having my life’s work stored on the NAS was really a good idea.  If the drives got corrupted, I would lose everything.

I started to spec out a new server to throw in my 12U rack.  I spent hours poring over specs for server cases and power supplies, looking for the quietest ones I could find.  I quickly came to the conclusion that a passively cooled system would be slower than my existing mac mini.

I pondered.  Maybe I could use the mac mini as a subversion server, mirrored to the subversion repositories on the NAS.  Then the idea struck me: why not turn the mac mini into a dedicated build server?

I purchased OS X Server ($20) and then I started to realize how incredibly useful this mac mini could be. By spending $20 for OS X server and $139 for a new disk drive I get all of the following:

•    Software Update Caching.  Right now, each of my macs polls the Apple servers to check whether there is a software update.  If there is, each mac downloads it directly from Apple.  OS X Server has a service that downloads each update exactly once, and each of your macs installs the update from the server, saving bandwidth on your Internet connection.

•    LDAP.  With the LDAP server, you can have a single network login for all of your macs, MacBooks, etc.

•    Provisioning.  The server can control iOS devices and macs, pushing down developer certificates.  This is a big win for an iOS developer.

•    Source code repositories.  The mini will host subversion and git source code repositories.  The subversion repositories will be mirrored to the NAS, so at any point I have two copies of my subversion repositories in sync.

•    Build server.  With the mac mini, I will be able to build Mac, iPhone, iPad, BlackBerry, and Android applications.  More importantly, with Jenkins, I can also have a build slave running on Windows.

•    And lastly, you get a crap wiki server.

Upgrading the Hardware (the Path to Hell is Paved with Good Intentions)

After upgrading the mac mini to Mountain Lion, the performance was terrible.  Granted, the mac mini’s performance was never great to begin with, by any measure, but I needed more speed.

So, I drove to the store and picked up a 1TB Seagate Solid State Hybrid Hard drive (SSHD). I cracked open the mini, admired the delicate craftsmanship, and then installed the drive.  That is when my plan nearly derailed.

I went searching through my boxes for the original DVDs that came with the mac mini and installed the OS.  However, I forgot that the app store didn’t come out until after Snow Leopard was released.  Therefore, if I wanted to upgrade to Mountain Lion, I’d have to upgrade to Snow Leopard first. The only problem was that after several hours of searching for the Snow Leopard installation DVDs, I simply couldn’t find them.

I downloaded the Lion installation package, copied it to the mini, and tried to install.  The installer refused, stating that the OS was too old and that I would have to upgrade to Snow Leopard first.

After another futile attempt to find the missing installation DVDs, I tried to use Migration Assistant (which doesn’t copy the operating system files).  Time Machine was no good either.

In a fit of desperation, I wrote the installer to a USB key.  With that, I was able to boot and finally install Mountain Lion on the mac mini.

Each step above wasted several hours, and when you can only devote several hours a night to this side project, this adventure took nearly a full week to run its course. 

Now, I can honestly say that purchasing the Seagate SSHD was well worth it.  It may not be the fastest drive on the market, but it is leaps and bounds faster than the 5400rpm Toshiba drive.  The system boots fast and is actually usable now.

The only problem is, I broke the heat sensor connector when reinstalling the drive.  The connector sheared off the board.  I was able to solder the sensor cable directly to the pads, but the fan is running wide open now.   There are two options to fix this, which I will cover later when I finish building the server and come back to it.

Installing OS X Server, XCode

Next, I installed the $20 OS X Server app.  This app installs on top of your existing operating system.

Turn on the caching service and then update your system from within the Server app.  Launch the App Store and update your software.  You will see that the caching service starts downloading the files and using disk space.

Next, I downloaded XCode.  Once XCode is downloaded, run the app, go to XCode->Preferences, and click on the Downloads icon.  Download the command line tools.

Once this has finished, take a moment to admire what you have now done.  The following software packages are installed without needing to compile anything: subversion, git, perl, php, ruby, python, and of course, Apple’s XCode compilers.
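
If you want to confirm that everything actually landed on your path, a quick spot check from the terminal will do it:

$ svn --version --quiet
$ git --version
$ ruby --version
$ python --version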

That is enough for now; next time we can start setting up the Subversion server to mirror to another repository.
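
As a preview of where this is headed, the mirroring itself is just svnsync.  A minimal sketch, with placeholder URLs standing in for the mini and the NAS, looks something like this:

# the mirror repository on the NAS must exist and have its pre-revprop-change hook enabled first
# both URLs below are placeholders for your own repositories
svnsync initialize svn://nas/repos/project-mirror svn://macmini/repos/project
svnsync synchronize svn://nas/repos/project-mirror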


configure generates .infig.status: error: cannot find input file:

I ran into this today.  When running configure, the “.infig.status: error: cannot find input file:” error was generated:

bash-3.2$ ./configure
checking for a BSD-compatible install… /usr/bin/install -c
checking whether build environment is sane… yes
checking for a thread-safe mkdir -p… /bin/mkdir -p
checking for gawk… gawk
checking whether make sets $(MAKE)… yes
checking for g++… g++
checking for C++ compiler default output file name… a.out
checking whether the C++ compiler works… yes
checking whether we are cross compiling… no
checking for suffix of executables…
checking for suffix of object files… o
checking whether we are using the GNU C++ compiler… yes
checking whether g++ accepts -g… yes
checking for style of include used by make… GNU
checking dependency style of g++… gcc3
configure: creating ./config.status
.infig.status: error: cannot find input file:


This appears to be caused by having DOS-style line endings in the configure script.

You should be able to use the dos2unix command or alternatively, the tr command:

$ tr -d "\15\32" < configure > configure.new
$ mv configure.new configure


What causes this?  Most likely you have DOS-style line endings in your configure.ac and/or Makefile.am.  Run dos2unix against them, rerun autoreconf --install, and then the configure script should be good to go.
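
The full cleanup is just a few commands.  Assuming your project uses the standard autoconf/automake toolchain, the sequence looks roughly like this:

$ dos2unix configure.ac Makefile.am
$ autoreconf --install
$ ./configure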