Jun 14, 2010
 

I was pondering the oil spill in the Gulf, my work in automata, my fascination with robotics, and my friends with boats in Pensacola. Then I had another one of my crazy ideas — hopefully it’s crazy enough to attract some interest and maybe even get done — so I thought I’d share. (That’s what blogs are for, right?!)

What if we (collectively) develop an open-source project to build (or refit) a fleet of small autonomous boats to patrol the Gulf, looking for oil to collect and separate from the water? Here are the key points:

  • The craft are small and slow moving so they are not dangerous. They should be just large enough to carry a useful amount of collected oil, and just fast enough to get out of their own way and survive in the ocean.
  • The control systems are a collection of relatively simple, dedicated, open-source components designed to fail safe. If one subsystem doesn’t get what it expects from another subsystem then the robot stops and waits (signals) for help. More sophisticated systems can interact with the simpler control subsystems for exotic behaviors– but the basics would be very close to “hard-wired” reflexes.
  • Broken parts can be easily swapped out. Upgrades are equally easy to deploy by replacing swappable components with better ones.
  • Each is equipped with a centrifuge and a scoop/skimmer. Its instinct is to seek out oil on the surface and turn on its skimmer while it slowly moves through that patch of ocean. The centrifuge separates the oil from the water. The water goes back into the ocean; the oil goes into the tank.
  • When a robot finds oil it tells its friends via radio, using GPS to identify its location. Along the way it can gather other data that it gets for free from its control system’s sensors, such as temperature, wind data, and any other data from attached sensors.
  • The instincts of the robots are based on a collection of simple behaviors and reflexes (more later).
  • Each has an open tank in back where the separated oil is deposited. When the robot detects that its tank is sufficiently full (or that it otherwise needs service/fuel) it will drive toward a barge where it will wait in line for its tank to be pumped out and its fuel tank to be topped off.
  • It might even be possible to make solar powered versions that do not require fuel — they would sleep at night. This kind of thing might also be a backup system to get the robot to safety in case of a main engine failure.
  • Endurance and autonomous operation are key design goals. These do not need to be (nor do we want them to be) big or fast or even particularly efficient. The benefit comes from their numbers, their small size, their ability to collaborate with each other, and their “always on” attitude. Since they work all the time and do not require human intervention they do not have to be powerful— just persistent. Their numbers and distribution are what gets the job done.
  • Since the robots are unmanned there is little exposure hazard for people (or animals). Robots don’t get sick — they may break down, but they don’t care how toxic their environment is during or after they do their job. These in particular are ultimately disposable if they need to be.
  • The subsystems should be designed so that they can be used in purpose built craft or deployed in existing craft that are re-purposed for the task.

Instincts (Roughly in order of priority):

  • Robots prefer to keep their distance from anything else on the surface of the water. They can do this with simple visual systems (or expensive LIDAR, or whatever folks dream up to put on their bot). Basically, if it doesn’t look like water they don’t want to be near it — unless, perhaps, it’s oil on top of water.
  • Robots prefer to stay in water that is at least a minimum depth. The shallower the water gets, the more the robot wants to be in deeper water. The safety limits for this can be partially enforced by separate sub-systems, but the primary goal is for the robot’s instincts and natural behaviors to automatically achieve the safety goals “as a matter of habit.”
  • Robots like to be closer to other robots that are successful — but not closer than the safe distance described earlier. If they get too close to something then the prior rule takes over. This allows the robots to flock on a patch of oil without running into each other. They will also naturally separate themselves in a pattern that optimizes their ability to collect oil from that patch. As a matter of safety they will also stay away from other vessels, even (perhaps especially) vessels that don’t act like other robots.
  • Robots like to be in places they have not been before. This instinct causes them to search in new places for oil.
  • If a robot can’t get close enough to a patch of oil because other robots have already flocked there then the robot will eventually stop trying and will go search somewhere else.
  • Robots like to be closer to shore (but not too close – see above) rather than farther away. This gives the robots a tendency to concentrate on oil that is threatening the coast and also minimizes the possibility that the robot will be lost in the deeper ocean. Remember the other rule above about keeping their distance from everything— that will keep them from getting too close to shore too. “Close to something” includes being in water that is too shallow.
  • Robots shut down if anything gets too close to them. So, if they malfunction and get close to something else, OR, if someone else gets close to them, then their instinct is to STOP. This behavior allows authorities to approach a robot safely at any time for whatever purpose. (A rough sketch of how these instincts might be arbitrated follows this list.)
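
The instincts above are listed in priority order, so the control loop itself can be quite simple. Here is a minimal C++ sketch of one way such an instinct stack might be arbitrated on each control cycle. To be clear, this is purely my illustration: every name in it (Instinct, Command, the stubbed sensor lambdas) is hypothetical, and a real boat controller would need sensor fusion, watchdogs, and the fail-safe interlocks described earlier.

```cpp
#include <functional>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Hypothetical steering command: a heading change and a throttle setting.
struct Command {
    double headingDelta;   // degrees, positive = turn to starboard
    double throttle;       // 0.0 .. 1.0
};

// Each instinct either proposes a command this cycle or stays quiet.
struct Instinct {
    std::string name;
    std::function<std::optional<Command>()> propose;
};

int main() {
    // Listed from highest to lowest priority, mirroring the list above.
    // Real versions would read sonar, GPS, vision, etc.; these are stubs.
    std::vector<Instinct> instincts = {
        {"stop-if-anything-is-too-close", [] { return std::optional<Command>{}; }},
        {"avoid-surface-objects",         [] { return std::optional<Command>{}; }},
        {"seek-deeper-water",             [] { return std::optional<Command>{}; }},
        {"flock-toward-successful-bots",  [] { return std::optional<Command>{}; }},
        {"explore-somewhere-new",         [] { return std::make_optional(Command{15.0, 0.3}); }},
    };

    // One control cycle: the highest-priority instinct with an opinion wins,
    // and everything below it is suppressed until the next cycle.
    for (const auto& i : instincts) {
        if (auto cmd = i.propose()) {
            std::cout << i.name << ": turn " << cmd->headingDelta
                      << " degrees, throttle " << cmd->throttle << "\n";
            break;
        }
    }
    return 0;
}
```

The design choice worth noting is that each instinct only ever proposes; whenever a higher-priority instinct has an opinion, everything below it is suppressed for that cycle. That is what lets simple, independently testable behaviors add up to the avoidance and flocking patterns described above.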

What I envision here is something that can be mass produced easily by anybody with the will and facilities to do it. All of the hardware and software components would be open-sourced so that they can be refined through experience and enhanced by everyone who is participating.

It seems to me that the problem with the oil that is already in the Gulf is that it is spread over a very wide area and it is broken up into lots of small patches that are too numerous to track and manage from a central location.

A fleet of robust, inexpensive, safe, autonomous skimmers would be able to collectively solve this problem through a distributed intelligence. Along the way the same fleet would be able to provide a tremendous amount of information about conditions that is currently not available.

The design is simple, and the craft are expendable. Each one collects oil that is already in the water and shouldn’t be, so if a robot fails catastrophically and sinks, the result is that the oil it collected is back in the water. Not great, but also no worse than it was before the oil was collected in the first place.

If this idea catches on then I believe we (collectively) could produce huge numbers of these in a very short time – and each one would contribute to solving a problem that is currently not solvable. Also, as the technology is refined, the same systems would be available for any similar events that occur later… After all, the world is not going to stop drilling for oil in the deep oceans (or elsewhere) until it is all but gone. That is an unfortunate fact, in my opinion, but a fact nonetheless.

I believe also that the technology that would be developed through the creation of this fleet and the subsystems that support it would be useful for many other purposes as well… ranging from automated search and rescue to border patrol and anti-terrorism efforts.

This is a rough draft taken from the back of the envelope.

Let me know what you think!

I would love to work on a project like this. 🙂

I would love even more to see LOTS of folks working on this.

PS. Just before pushing the button I had another idea… (as I often do). What if the robots also had behaviors that allowed them to bucket-brigade oil toward collection points? So… if a slow-moving robot could not possibly make it out to the barge from its station near the shore, it would instead make a trip toward the barge and, upon meeting up with one of its buddies, hand its cargo off. Consider a kind of dance: the bot giving leads the bot accepting; it dumps its cargo into the water just ahead of its buddy, and its buddy scoops it up. At the very least the oil is farther from shore, and at best most of the transfer is completed safely without any single robot needing the range or speed required to make the entire trip to the collection point… In fact, this could be the primary mechanism: bots could dump their cargo in a collection area a safe distance from the barge. Then other specialized equipment could safely collect it from there…

Apr 28, 2010
 

No, I’m not kidding…

Race Conditions are evil right?! When you have more than one thread racing to use a piece of shared data and that data is not protected by some kind of locking mechanism you can get intermittent nonsensical errors that cause hair loss, weight gain, and caffeine addiction.

The facts of life:

Consider a = a + b; Simple enough and very common. On the metal this works out to something like:

Step 1: Look at a and keep it in mind (put it in a register).
Step 2: Look at b and keep it in mind (put it in a different register).
Step 3: Add a and b together (put that in a register).
Step 4: Write down the new value of a (put the sum in memory).

Still pretty simple. Now suppose two threads are doing it without protection. There is no mutex or other locking mechanism protecting the value of a.

Most of the time one thread will get there first and finish first. The other thread comes later and nobody is surprised with the results. But suppose both threads get there at the same time:

Say the value of a starts off at 4 and the value of b is 2.

Thread 1 reads a (step 1).
Thread 2 reads a (step 1).
Thread 1 reads b (step 2).
Thread 2 reads b (step 2).
Thread 1 adds a and b (step 3).
Thread 2 adds a and b (step 3).
Thread 1 puts the result into a (step 4).
Thread 2 puts the result into a (step 4).
Now a has the value 6.

But a should be 8 because the process happened twice! As a result your program doesn’t work properly; your customer is frustrated; you pull out your hair trying to figure out why the computer can’t add sometimes; you become intimately familiar with the pizza delivery guy; and you’re up all night pumping caffeine.
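
If you want to watch this happen on your own machine, here is a tiny C++ sketch (mine, not from any production code) that runs the unprotected a = a + b update on two threads. Strictly speaking the data race makes this undefined behavior in C++, which is exactly the point; on most machines the printed total simply comes up short.

```cpp
#include <iostream>
#include <thread>

long long a = 0;                 // shared, deliberately unprotected
const long long b = 2;
const int iterations = 1000000;

void worker() {
    for (int i = 0; i < iterations; ++i) {
        a = a + b;               // read, add, write back: three separate steps
    }
}

int main() {
    std::thread t1(worker);
    std::thread t2(worker);
    t1.join();
    t2.join();

    // Expected 2 * iterations * b; lost updates usually make it smaller.
    std::cout << "expected " << 2LL * iterations * b
              << ", got " << a << "\n";
    return 0;
}
```

Put a mutex around the update (or declare a as std::atomic<long long> and use fetch_add) and the numbers come out right every time, at the cost of the locking overhead discussed below.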

This is why we are taught never to share data without protection. Most of the time there may be no consequences (one thread starts and finishes before the other). But occasionally the two threads will come together at the same time and change your life. It gets even stranger if you have 3 or more threads involved!

The trouble is that protection is complicated: It interrupts the flow of the program; it slows things down; and sometimes you just don’t think about it when you need to.

The story of RTSNF and MPPE:

All of this becomes critical when you’re building a database. I’m currently in the midst of adapting MicroNeil’s Multi-Path Pattern Engine (MPPE) technology for use in the Real-Time Message Sniffer engine (RTSNF).

RTSNF will allow us to scan messages even faster than the current engine which is based on MicroNeil’s folded token matrix technology. RTSNF will also have a smaller memory footprint (which will please OEMs and appliance developers). But the most interesting feature is that it will allow us to distribute new rules to all active SNF nodes within 90 seconds of their creation.

This means that most of the time we will be able to block new spam and virus outbreaks and their variants on all of our customers’ systems within 1 minute of when we see a new piece of spam or malware in our traps.

It also means that we have to be able to make real-time incremental changes to each rulebase without slowing down the message scanning process.

How do you do such a thing? You break the rules!

You’re saying race conditions aren’t evil?? You’re MAD!
(Yes, I am. It says so in my blog.)

Updating a database without causing corruption usually requires locking mechanisms that prevent partially updated data from being read by one thread while the data is being changed by another. If you don’t use a locking mechanism then race conditions virtually guarantee you will have unexpected (corrupted) results.

In the case of MPPE and RTSNF we get around this by carefully mapping out all of the possible states that can occur from race conditions at a very low level. Then we structure our data and our read and write processes so that they take advantage of the conditions we have mapped without producing errors.

This eliminates the “unintended” part of the consequences and breaks the apparent link between race conditions and certain disaster. The result is that these engines never need to slow down to make an update. Pattern scans can continue at full speed on multiple threads while new updates are in progress.

Here is a simplified example:

Consider a string of symbols: ABCDEFG

Now imagine that each symbol is a kind of pointer that stands in for other data — such as a record in a database or a field in a record. We call this symbolic decomposition. So, for example, the structure ABCDEFG might represent an address in a contact list. The symbol A might represent the Name, B the box number, C the street, D the city, etc… Somewhere else there is a symbol that represents the entire structure ABCDEFG, and so on.

We want to update the record that is represented by D without first locking the data and stopping any threads that might read that data.

Each of these symbols is just a number, and so it can be manipulated atomically. When we tell the processor to change D to Q there is no way that processor or any other will see something in-between D and Q. Each will only see one or the other. With almost no exceptions you can count on this being the case when you are storing or retrieving a properly aligned value that is equal in length to the processor’s word size or shorter. Some processors (and libraries) provide other atomic operations also — but for our purposes we want to use a mechanism that is virtually guaranteed to be ubiquitous and available right down to the machine code if we need it.

The trick is that without protection we can’t be sure when one thread will read any particular symbol in the context of when that symbol might be changed. So we have two possible outcomes when we change D to Q for each thread that might be reading that symbol. Either the reading thread will see the original D or it will see the updated Q.

This lack of synchronization means that some of the reading threads may get old results for some period of time while others get new results. That’s generally a bad thing at higher levels of abstraction such as when we are working with serialized transactions. However, we are working at a very low level where our application doesn’t require serialization. Note also that if we did need to support serialization at a higher level we could do that by leveraging these techniques to build constructs that satisfy those requirements.

So we’ve talked about using symbolic decomposition to represent our data. Using symbolic decomposition we can make changes using ubiquitous atomic operations (like writing or reading a single word of memory) and we can predict the outcomes of the race conditions we allow. This means we can structure our application to account for these conditions without error and therefore we can skip conventional data protection mechanisms.

There is one more piece to this technique that is important and might not be obvious so I’ll mention it quickly.

In order to leverage this technique you must also be very careful how you structure your updates. The updates must remain invisible until they are complete. Only the thread making the update should know anything about the change until it’s complete and ready to be posted. So, for example, if we want to change the city in our address that operation must be done this way:

The symbols ABCDEFG represent an address record in our database.
D represents a specific city name (a string field) in that record.

In order to change the city we first create a new string in empty space and represent that with some new symbol.

Q => “New City”

When we have allocated the new string, loaded the data into it, and acquired the new symbol we can swap it into our address record.

ABCDEFG becomes ABCQEFG

The entire creation of Q, no matter how complex that operation may be, MUST be completed before we make the higher level change. That’s a key ingredient to this secret sauce!
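
For the curious, here is a rough C++ sketch of that recipe (my illustration, not the actual MPPE/RTSNF code). The record is just a row of pointer-sized symbols. The update builds the replacement field completely off to the side and then publishes it with a single word-sized store, so any reader sees either the old value or the new one and never anything in between. I use std::atomic here only to make the intent explicit and portable; the engines described above work at a lower level.

```cpp
#include <atomic>
#include <cstdio>
#include <string>

// An address record is a row of "symbols" -- here, pointers to field strings.
constexpr int FIELDS = 7;                       // A..G
std::atomic<const std::string*> record[FIELDS];

// Readers call this at any time, on any thread, with no locks.
void printCity() {
    // One pointer-sized load: we see either the old city or the new one.
    const std::string* city = record[3].load(std::memory_order_acquire);
    std::printf("city: %s\n", city->c_str());
}

// Writer: change field D (index 3) to a new city name.
void updateCity() {
    // Step 1: build the replacement completely, invisible to readers.
    const std::string* q = new std::string("New City");

    // Step 2: publish it with one atomic, word-sized store.
    const std::string* old = record[3].exchange(q, std::memory_order_release);

    // Step 3: the old string can only be reclaimed once no reader can still
    // be using it; that reclamation problem is separate from the swap itself.
    (void)old;
}

int main() {
    static const std::string init[FIELDS] = {
        "Name", "Box", "Street", "Old City", "State", "Zip", "Country"};
    for (int i = 0; i < FIELDS; ++i)
        record[i].store(&init[i], std::memory_order_relaxed);

    printCity();     // prints the old city
    updateCity();
    printCity();     // prints the new city
    return 0;
}
```

One thing the sketch deliberately dodges is reclaiming the old string. In a long-running engine you need some scheme (reference counts, epochs, or simply never freeing memory during a scan) to know when no reader can still be holding the old symbol.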

Now go enjoy breaking some rules! You know you want to 🙂

Mar 23, 2010
 

The Direct Sound EX-29 extreme isolation headphones absolutely live up to the hype. Bleed is non-existent; they are comfortable; they are clear; and they are very quiet. I’ve been using these in the studio for a few days now and I don’t know how I ever lived without them. Really – they are that good!

I try to spend a good deal of time behind the kit if I can swing it – just for fun, but also working out drum tracks for new songs, and of course, recording new material. These headphones shine in all of these applications.

Just Jammin’:

When I’m just jammin’ and keeping my chops up these cans help me keep everything at a sane volume which means I can work longer without fatigue and without damaging my hearing. In the past I have used ear plugs of various types and they have all had a few critical drawbacks that the EX-29s don’t. Two that spring to mind are comfort and clarity.

[ What do you mean “clarity”… ear protection isn’t supposed to be clear anyway! ] I MEAN- ear plugs aren’t clear – ever! At least not in my experience. Nor are most other practical solutions.

If you’ve spent any serious time (multi-hour sessions) behind the kit with ear plugs you know what I’m talking about — You can’t hear what you’re doing and it really takes a toll on your subtlety. Most likely you got frustrated at some point and flicked the ear plugs across the room so you could hear again. (You did have them in at first didn’t you??!)

The EX-29s surprisingly don’t have this problem. One of the first things I noticed was how flat the attenuation was. After a few minutes in the relative quiet of the EX-29s I adapted and was able to hear everything – just at a lower level. This means I don’t lose crush rolls, ghost strokes, and cymbal shading for the sake of my hearing. Don’t get me wrong — it’s not perfect 🙂 but it is worlds better than any ear plugs I’ve ever used and the translation of subtlety has a big pay-off in that I don’t suffer any fatigue from trying too hard to hear what I’m doing.

Then there’s comfort. Of course phones of any kind are going to be more comfortable than plugs… but the EX-29s do better than that. They are truly comfortable even after more than a couple of hours. They don’t squeeze your head, and they lack that pillows-on-the-ears feeling that typically comes with good protection.

Writing:

When I’m working out new drum tracks I often spend hours trying things out. That means playing back scratch tracks, samples, and loops and playing along to find the right grooves and fills. I used to use my Sony MDR-V600s for this. I would try to keep things at a low level, or I might use a bit of cotton (if I thought of it)… but invariably things would eventually get out of control or I would get tired from fighting with it and would have to come back later.

The EX-29s have solved this problem for me. I don’t miss any of the clarity I get from my V600s AND I don’t need any cotton for the ears :-).

The first thing I noticed when I used the EX-29s was that I had to turn my Furman monitor system way down! (ok, 2-3 notches) Everything was still clear, and I could hear my playing along with the playback without struggling to adjust to unnatural muffling. Even better – I didn’t get frustrated with it and discard my protection!

Recording:

Recording sessions are where the EX-29s really come through. Once the mics are on and every sound matters there are several things that shine about the EX-29s. In no particular order:

The isolation is absolutely fantastic! I frequently play pieces that demand a lot of dynamic range (I’m an art-rock guy at heart). It’s surprising how sensitive the mics need to be when you want to capture the subtlety of such a loud instrument. Any bleed-through from the playback can destroy the subtlety of a quiet passage by forcing re-takes or necessitating the use of gating, expansion, and other trickery. It’s no wonder drums are so frequently sequenced these days– it boils down to time and effort (which means money).

The EX-29s truly solve the isolation problem in two ways. The attenuation of the shells is quite substantial but in addition to that the quality of the drivers is also fantastic! This combination means that you can achieve comfort and clarity at substantially reduced playback levels. Not only is your playback not likely to get into your mics, but it is also at a much lower level to begin with.

Do the math (I did) — you not only drop about 30 dB getting from the inside of the EX-29s to the outside; you also drop an additional 12–15 dB by running lower playback levels in the first place. That’s roughly 42–45 dB of effective isolation without struggling to adapt or building up fatigue trying to “hear it”. Compare that to what you’re doing now and chances are you’ll see a 20 dB advantage with the EX-29s – not to mention more comfortable and productive recording sessions.

I’ll admit it – When I first heard about the EX-29s I was more than a little skeptical. They just seemed too good to be true. When I finally broke down and ordered them it was with the attitude that I’d give them a shot and if (when) they didn’t quite cut it I would find some other use for them.

No longer – These EX-29s are the real deal. They have earned a permanent home in my studio. I’m glad I picked up the extra pair to hang on my book shelf so we won’t have to fight over who gets to use them 🙂

Mar 04, 2010
 

Those trixy blackhatzes are making a real mess of things these days. The last day or so in particular has been a festival of hacked servers and exploited free-hosting sites. Just look at this graph from our soon-to-be-launched Spam-Weather site:


While spammers have always enjoyed exploiting free services they have been particularly busy at it the last few days. The favorites this time around have been webstarts and doodlekits. What makes sites like these so attractive to the blackhats is that there is virtually no security on the sites. Anybody can sign up for a new account in minutes without any significant challenges. This means that the entire process can be scripted and automated by the blackhats.

After they’ve used one URL for a while (and it begins to get filtered) they simply light up another one, and so on, and so on.

Some email administrators are tempted to block all messages containing links to free hosting sites — and for some that might be an option — but for PROs like us it’s not. There are usually plenty of legitimate messages floating around with links to free-hosted web sites so blocking all such links would definitely lead to false positives (unacceptable).

At ARM we have a wide range of defenses against these messages so we’re able to block not only on specific links but also on message structures, obfuscation techniques, and other artifacts that are always part of these messages. In addition to that our tools also allow us to predict what the next round of messages might look like so that even when they do change things up we’re often ahead of them.

No mistake about it though… it’s hard work!

It would be _MUCH_ better for everyone if folks that offer free hosting and other commonly exploited services (like URL shortening, blog hosting,  and free email accounts) would do a better job keeping things secure.

Feb 06, 2010
 

Just after we moved here a dozen or so years ago we had a snow storm that was pretty good. It was quite an adventure.

At one point I had to abandon our car in a grocery store parking lot and walk home. On the final stretch of that walk I tried to take a short cut down the hill behind our house and had to abandon the attempt and go around — the snow was up to my waist and 5 minutes of effort would get you only a few meters of progress. I could see the house, and Linda could see me… we waved, and I turned around to walk the rest of the way on the roads, which were just a little better.

The lentil soup w/ ham was amazingly good after that long walk home to our cozy house. We still try to recreate that experience from time to time.

This storm is bigger than that, but we’re not going out in it except to shovel a bit and have some fun. This time we’re well prepared and perhaps a little less adventurous.  The boys are having a blast — I hope they’re building some happy memories along with their snow forts. I’m sure they are.

In the midst of all this I can’t help but think of the homeless though. The sleeping bags MicroNeil purchased for TOP arrived on Friday. The original plan was for them to go to DC this weekend. The weather had other plans — We’ll push to get them delivered as soon as possible after the storm. I know the folks at TOP are anxious too.

As the snow falls outside my office window my mind drifts back to home, to the boys playing outside, to the beauty of it, and the memories we’ll make of it.

This kind of snow is the stuff of legend… the kind of thing that only happens around here once or twice in your childhood and maybe a few times in your life. That keeps it special. For folks who live much farther north of here it’s probably just another snowy day.

For us here in the mid-Atlantic it happens just often enough; and when it does it’s an opportunity for everyone to pause and reflect – to change their lives for a few days, talk to their neighbors, have a few adventures, and make some memories – stories they can share.

To quote Ernest T. Bass: “I was right there in it!”

If you’re here in it with us, or otherwise in similar circumstances, we wish you well and hope all of your adventures ultimately turn into happy memories.

The rest is pictures…

Feb 03, 2010
 

Noise, Noise, Noise, Noise! grumbled the Grinch… and I feel his pain. One of the challenges of building a recording studio is noise. We live in a very noisy world.

One way we deal with noise is to put noisy things in a special room which can be isolated from the recording environment. Here at the Mad Lab we have a utility room where we keep our server farm, CD/DVD production robot, air-handler, and other noisy things. The trick is: How do we keep all that stuff quiet?

There are two things we want to do to this room: Reduce the noise inside the room as much as possible and then prevent whatever is left over from leaking out.

The first step to treating the room was to significantly increase the density of the walls. At the same time we wanted to increase the structural integrity of the paneling on the opposite side. What we did was to add a thick, dense layer of work-bench material to the outside of the wall directly behind the paneling (another story we’ll post later).

The next step was to add sound absorbing material to the inside of the room to absorb as much noise as possible (and convert it to heat). The thinking behind this is that the more sound we can absorb the less sound there is to bounce around the room and leak out.

In addition we decided to put physics to work for us and install this material so that it is suspended from the studs flush with the inside of the wall, leaving an air gap between the insulation and the outer wall material. This accomplishes two things. The insulation on the inside surface is mechanically isolated from the outer wall structure, thus preventing any (most) mechanical sound transmission. Also the air gap represents an additional change in density so that any sound attempting to travel through the wall from the inside experiences at least three separate media (more on this in a moment).

We did some research and contacted our friends at Sweetwater to purchase some Auralex mineral fiber insulation. Then to make it easier to handle we had our friends at Silk Supply Company precision cut the material and manufacture fabric covered panels.

The custom made panels fit perfectly between the studs and leave a gap of about half an inch between them and the dense outside wall. When sound attempts to escape through the wall three things happen.

First a lot of the energy is absorbed into the mineral fibers — the fabric covering is acoustically transparent. This significantly reduces any echoes inside the room and converts a good portion of the sound to heat. This effect is enhanced by the loose mechanical coupling of the installation. Since the panels are suspended from the front surface of the studs any mechanical energy that might be transmitted through the studs is first significantly attenuated as it travels through the mineral fibers to the edges.

Second, any sound that makes it through the  insulation escapes into the air gap where the change in density causes the sound to refract… well, sort of. The size of the gap is very small compared to the wavelength of most sounds so most of the effect is really a mechanical decoupling of the mineral fiber and the hard surface of the outer wall material.

Third, much of the sound in the air gap is reflected back toward the mineral fiber by the smooth, hard surface of the outer wall material. In addition the density of the material further attenuates whatever is not reflected.

Since one of my goals was to attenuate the noise inside the room (and for a number of other reasons) I didn’t want to go the more conventional route of adding thick layers of drywall.

In line with this, the fabric covering has a few additional benefits. To start with, the insulation is much easier to install, and if need be it can be temporarily removed by pulling the staples and tugging it out of its slot. This might be useful if I need to run any additional cabling, for example. In addition to that the fabric reinforces the mineral fiber and keeps it well contained so it doesn’t slough off into the room over time.

As usual I enlisted Ian and Leo to perform the installation. They had a lot of fun exploring the change in acoustic properties by alternately talking in front of sections where they had installed the panels and sections where the panels were not yet installed.

Jan 03, 2010
 

We’re doing a lot of cross-platform software development these days, and that means doing a lot of cross-platform testing too.

The best way to handle that these days is with virtual computing since it allows you to use one box to run dozens of platforms (operating system and software configurations) at once – even simultaneously if you wish (and we do).

Until recently we were outsourcing this part of our operation but that turned out to be very painful. To date nobody in the cloud-computing game quite has the interface we need for making this work. In particular we need the ability to keep pristine images of platforms that we can load on demand. We also need the ability to create new reusable snapshots as needed.

All of this exists very nicely in VMWare, of course, but to access it you really need to have your own VMWare setup in-house (at least that’s true at the moment). So I ordered a new Dell PowerEdge 2970 to run at the Mad Lab with ESXi 4.

Hey Leo - Install that for me

Around the Mad Lab we like to take every opportunity to teach, learn, and experiment so I enlisted Leo to get the server installed.

The first thing that occurred to me after it arrived is that it’s big and heavy. We have a rack in the lab from our old data center in Sterling, but it’s one of the lighter-duty units so some “adaptation” would be required. Hopefully not too much.

Mad Rack before the new server

Another concern that I had is that this server might be too loud. After all, boxes like this are used to living in loud concrete and steel buildings where people do not go. I need to run this box right next to the main tracking room in the recording studio. No matter though – it must be done, and I’ve gotten pretty good at treating noisy equipment so that it doesn’t cause problems. In fact, the rack lives in a special utility room next to the air handler so everything I do in there to isolate that room acoustically will help with this too.

Opening the box we quickly discovered I was right about the size. The rail kit that came with the device was clearly too large for the rack. We would have to find a different solution.

The server itself would stick out the back of the rack a bit so I had Leo measure its depth and check that against the depth we had available in the rack.

As it turned out we needed to move the rack forward a bit in order to leave enough space behind it. The rack is currently installed in front of a structural column and some framing. Once Leo measured the available distance we moved the rack forward about 8 inches. That provided plenty of space for the new server and access to its wiring.

Gosh those rails look big

How long is it?

Must move the rack to make room

That solved one problem but we still had the issue of the rails being too long for the rack. Normally I might take a hack saw to them and modify them to fit but in this case that would not be possible – and besides: the rail kit from Dell is great and we might use it later if we ever move this server out of the Mad Lab and into one of the data centers.

Luckily I’d solved this problem before and it turned out we had the parts to do it this time as well. Each of these slim-line racks has a number of cross members installed for ventilation and stability. These are pretty tough pieces of kit though so they can be used in a pinch to act as supports for the front and back of a long server like this. Just our luck we had two installed – they just needed to be moved a bit.

I explained to Leo how the holes are drilled in a rack, the concept of “units” (1-U, 2-U, etc), and where I wanted the new server to live. Leo measured the height and Ian counted holes to find the new locations for the front and back braces.

Use these braces instead of rails

Teamwork

Then Leo held the cabling back while I loaded the new server into the rack. We keep power cables on the left side and signal cables on the right (from the front). The gap between the sides and the rails makes for nice channels to keep the cabling neat… well, ok, neat enough ;-). If this rack were living in a data center then it wouldn’t be modified very often and all of the cables would be tightly controlled. This rack lives at the Mad Lab where things are frequently moved around and so we allow for a little more chaos.

Once the server is over the first brace it’s easy to manage. In fact, it’s pretty light as servers go. This kind of thing can be done with one person but it’s always best to have a helper.

Power Left, Signals Right

Slides right in with a little help

Once the server was in place we tightened up the thumb screws on the front. If the braces weren’t in the right place this wouldn’t have worked because the screw holes wouldn’t have aligned. Leo and Ian had it nailed and the screws mated up perfectly.

Tighten the left thumb screw

Tighten the right thumb screw

With the physical installation out of the way it was time to wire up the beast. It’s a bit dark in the back of the rack so we needed some light. Luckily this year I got one of the best stocking stuffers ever – a HUGlight.

The LEDs are bright and the bendable arms are sturdy. You can bend the thing to hang it in your work area, snake it through holes to put light where you need it, stand it on the floor pointing up at your work… The possibilities are endless. Leo thought of a way to use it that I hadn’t yet – he made it into a hat!

HUGLight - Best stocking stuffer ever!

Leo wears HUGlight like a hat

Once the wiring was complete I threw the keyboard and monitor on top, plugged it in, and pushed the button (smoke test). Sure enough, as I feared, the server sounded like a jet engine when it started up. For a moment it was the loudest thing in the house and clearly could not live there next to the studio if it was going to be that loud… either that or I would have to turn it off from time to time, and I sure didn’t want to do that.

Then after a few seconds the fans throttled back and it became surprisingly quiet! In fact it turns out that with the door of the rack closed and the existing acoustic treatments I’ve made to the room this server will be fine right where it is. I will continue to treat the room to isolate it (that project is only just beginning) but for now what we have is sufficient. What a relief.

Within a minute or two I had the system configured and ready for ESXi.

It Is Alive!

The keyboard and monitor wouldn’t be needed for long. One of the best decisions I made was to order the server with DRAC installed. Once it was configured with an IP address and connected to the network I could access the console from anywhere on my control network with my web browser (and Java). Not only that but all of the health monitors (and then some) are also available. It was well worth the few extra dollars it cost. I doubt I’ll ever install another server without it.

Back in the day we needed to physically lay hands on servers to restart them; and we had to use special software and hardware gadgets to diagnose power or temperature problems – up hill, both ways, bare feet, in the snow!! But I digress…

Mad Rack After

After that I installed ESXi, pulled out the disk and closed the door. I was able to perform the rest of the setup from my desk:

  • Configured the ESXi password, control network parameters, etc.
  • Downloaded vSphere client and installed it.
  • Connected to the ESXi host, installed the license key.
  • Set up the first VM to run Ubuntu 9.10 with multiple CPUs.
  • … and so on

The server has now been alive and doing real work for a few days and continues to run smoothly. In fact I’ve not had to go back into that room since except to look at the blinking lights (a perk).

Dec 31, 2009
 
Sniffy New Year 2010

A New day, A New year, A New decade, Another chance to make things better… To do something good in a sustainable way so that we can build on it and make a lasting difference.

One of the things I do is develop technology for filtering out bad email (spam, scams, viruses, “malware”). The goal is to protect people from the predators out there and help to make sure the Internet has a chance to achieve its potential for good.

Of course, doing that means that my team and I spend a lot of time wading through the worst stuff on the ‘Net. Honestly, sometimes I really hate that job – wallowing in humanity’s filth for hours on end can really bum you out.

What started as a nuisance has grown into something much more sinister. Today spam and other malware is produced largely by organized crime. Their “business” is well funded, sophisticated, and ranges from presenting you with uninvited advertisements to hacking your computer, money laundering, identity theft and fraud, all the way to human trafficking, cyber warfare and terrorism.

I invite you to view this TED talk on the intricate economics of terrorism:

http://www.ted.com/talks/loretta_napoleoni_the_intricate_economics_of_terrorism.html

As a result of this phenomenon everyone who provides services on the Internet must now spend a significant amount of money and effort to protect themselves and their customers. It has become a necessity.

It’s very depressing. I know I would like to spend that energy doing more positive work – not just holding back the darkness.

I don’t let that stuff keep me down, but thoughts like that float around in my brain with all of the others looking for ways to connect. Sometimes they connect in surprising ways and call me to start out in new directions.

The other day I was pondering all of this while shopping for a gift for my brother. He enjoys camping, and reading, and this year in particular he’s become interested in outdoor survival books (Man vs Wild kinds of stuff). I had picked up a book about surviving on K2 and was looking for something to add when I wandered into the camping aisle and came face to face with a sleeping bag…

This wasn’t what I was looking for but it struck a nerve. Just recently I had made a live recording for Evergreen Church where they were interviewing some folks from TOP (Teens Opposing Poverty). The stories these folks told about living (surviving) on the streets of DC had stuck with me. Evergreen Church teens regularly work with TOP and the church has been collecting sleeping bags to donate to TOP for their next trip into DC.

Teens Opposing Poverty

Just then it occurred to me that I had another opportunity to do something good. As Steve Jennings (Executive Director of TOP) puts it: “Sleeping bags are like gold to homeless people… The need for sleeping bags never goes away.”

For the month of January MicroNeil will donate a new sleeping bag to TOP for every new customer that subscribes to Message Sniffer.

This is a way we can convert some of the darkness generated by the blackhats into light (and warmth) and hopefully make a difference when it matters most. It’s very cold on the streets of DC in January – and this year we just had two feet of snow!

I’m also hopeful that this promotion will call more attention to TOP and efforts like it. TOP in particular is focused on engaging and connecting young people with homeless folks in a meaningful way– and reconnecting the homeless with their community. These connections are in many ways more important than providing critical services and materials because it’s the connections that translate into hope and opportunity.

http://www.teensopposingpoverty.org/
Dec 17, 2009
 
High-Def-Spelling

How does an 8 year old do his spelling homework when he has access to a high-definition digital studio?

By voice-over of course!

We hit upon this idea when it became clear to us that ordinary spelling homework is, well, ordinary. We can do better than this I thought — and off to the Mad Lab we went with spelling words in hand.

Now Ian not only gets his spelling right; he also gets to do something most people never experience… When is the last time you did a voice-over in a real recording studio?

Here’s how it works: First we set up a recording session with all the bells and whistles. Then Ian reads his spelling words as clearly as he can and leaves enough space in between each word so that he’ll have time to write the words down when we play it back. Then we play it back for him (hopefully without stopping) and he writes them down. This is much more exciting than having one of us read his words to him – and there are added benefits.

In addition to spelling he is learning about the way things work in a studio — especially the recording process. This covers a lot of additional skills and knowledge. There’s science (mic positioning, setting levels), planning and following a process, teamwork, communications skills (not just the voice-over itself, but the interaction between the engineer and the performer too), patience, etc. He also learns what every voice-over artist learns— there is no hiding from that microphone! The details matter and so as he progresses he’s learning to speak clearly, to avoid making unnecessary noises, and to pay attention to the details.

He’s also having a blast with it! And so am I.

Dec 17, 2009
 

Greetings earthlings! — I really should stop saying things like that, or the villagers might show up. But, what do you say on the Hello World! post of a new blog? Believe it or not, it’s not in the handbook.

No matter. It’s done now.

If I’m lucky, I’ll delete this and replace it with something better before anybody sees it … (hehehehe).