Think Different - Upgrading a "Sad" iMac


During the spring announcements, Apple executive Phil Schiller noted that about 600 million people are using computers over five years old and remarked, "This is really sad." The comment angered a lot of folks, especially those in despair over the ever-rising cost of Apple hardware with seemingly less functionality. Several of my co-workers (all Apple fans, myself included) and I have been chattering about what the company's future holds. Today I'd like to tell the brief story of my Apple computing adventures and the recent hardware upgrades that have a 2007 iMac running like a champ.

I was formerly a PC person, but as an undergraduate I was converted to the Mac by the ease of developing code and doing scientific research in a *nix-based environment. Macs were less prone to viruses and offered superior hardware designed to be compatible with superior software. I had no problem paying good money for that hardware; it lasted and lasted. I dove in with a white MacBook and an iMac in 2007. The MacBook had some cracking issues with the plastic that Apple was swift to repair for free. The iMac was flawless. I could upgrade the RAM easily. The hard drive wasn't so easy, but more on that later.

In 2011 I decided to buy a MacBook Pro as my primary machine, along with the beautiful 27" Thunderbolt Display. This was a very expensive trip to the mall! I still use the monitor daily and love it. The MacBook Pro did the majority of my computing until late 2013 - wait, what? Yes, due to a design flaw, the excessive heat from intense computation had cracked solder joints on the logic board. Others I know were plagued by this as well. It was an ~$800 repair to get back to a working machine, and it could be expected to fail again. I set aside a several-thousand-dollar machine after not quite three years of service. To add insult to injury, I had just put in a hybrid SSD and more RAM. Apple did nothing.

By this point I had a nice 2012 iMac at work where all of my "serious" computation happened, so I bought a MacBook Air as my laptop/home machine. It's light and the battery life is unbeatable. I really like it, but in just a few years it has aged poorly. I'd like to put more RAM inside and a larger SSD. That's not possible: in an effort to make the machine thin, those components were soldered to the logic board. No upgrades are possible. Again... what? I can't take advantage of the rapid reduction in storage prices? Ugh.

Now to the point of the story. My wife has been using my 2007 iMac, the first one. It was getting slow and unfriendly to use, so I decided to max out the RAM (4 GB) and put in an SSD. For a couple hundred dollars in parts and an evening's worth of work, the computer runs like a champ. It's certainly not like a new machine, but it will last several more years, and after that it would be a great machine to run my 3D printer full time. It still runs Adobe Illustrator and other graphics programs. It's a perfectly usable machine - all because I can upgrade it. Sure, installing the new hard drive was a PAIN. I had to pull the monitor out and ended up doing it twice because of a contact issue. But it worked, and we got more life out of the machine!


While I was in there, I noticed that one of the power supply filter capacitors was bulging and likely bad. Not a problem! This power supply and system were over-engineered. Sure, the computer is a bit thicker than my new iMac, but hey - I didn't have to replace the power supply! Weight and size really count when sending things to space, but I'm pretty sure most Apple hardware stays firmly on the ground.


What's the point of telling you about my history of Apple purchases? Aren't I biased, as a MacBook Air, iMac, iPhone, iPad, and Apple Watch owner? The point is that I'm worried. I've always treasured Apple products because they just worked; they let me do my job with the least amount of friction. Recently I've spent more time fighting the limitations of machines I can't upgrade and battling bugs in software released to the public that Steve Jobs would have called "beta" at best. Am I ready to jump ship? No. Am I looking around? Yes.

As many of you know, I'm involved in a lot of engineering projects as well as pure geophysical research. A lot of engineering software like SolidWorks is designed to run on Windows, and some of the programming and hardware CAD/CAM tools I use are Windows-only as well. I've generally run a virtual machine on my Macs with Parallels (which has now moved to a subscription model). It is quickly becoming the case that I spend time in both operating systems every day. I even bought a Microsoft Surface because the pen note-taking experience is so much better than Apple's offerings.

Many CAD tools are also beginning to offer Ubuntu Linux versions. I've even found these to be better maintained than the Mac-native versions of the same applications.

I used to say that I was going to build a desktop with amazing specs and run Ubuntu on it. I'd sure miss some of my Mac apps though - things like TextExpander, OmniFocus, and OmniOutliner, to name a few. I do have them on iOS as well. I'm very curious to see what happens in the next five years as things become increasingly platform-independent and cloud-based. Will switching between devices become seamless? Microsoft's surprise announcement yesterday of bash shell integration in Windows 10 certainly showed that they are willing to cater to the developers Apple is beginning to limit. The multi-platform OneNote also exceeds anything Apple Notes can do.

Right now, I can't say where we'll all be in a few years. The fact that Apple used its most recent event to announce another screen size and new watch bands instead of new Mac hardware has me worried. Power users seem to have been a non-priority for Apple since the over-priced and non-upgradable Mac Pro (the trashcan version) was released in 2013. I need reliable, strong computers, and I'm getting tired of carrying two laptops, a tablet, a phone, an iPad, etc. I'm hoping the situation becomes clearer in a few years when it's time to retire a lot of my current hardware.

A final sidenote: I recently received an 8 TB hard drive for review. In the days of the tower Mac Pro I could have popped it into one of the many equipment bays, maybe added a new video card while I was in there. Now I need to buy an external dock and take up a USB-C port. Maybe I should look into NAS systems... Maybe I should build a tower at my workbench. How have you been thinking about your computing environment recently? I'm wondering if it's time to think different.

How Thick is the Crust?

Earth's Structure (Image: Wikipedia)

I think the first time I really heard much about the Earth's crust was on the TV show "Bill Nye the Science Guy." (In fact, I was obsessive about not missing an episode as a child and I was ecstatic when I got to see Bill speak at Penn State last year.) He talked about earthquakes and Earth's structure, cut in with funny segments of a family telling their son, "Ritchie, eat your crust."

The crust is an interesting thing - it's what we live on top of, and there are lots of interesting places where it differs due to geologic processes that concentrate certain types of materials. The crust is broken up into about a dozen major tectonic plates that move a few inches per year. These plates are made of either oceanic or continental crust. Oceanic crust is relatively thin at ~6 km (4 miles), while continental crust is much thicker at ~35 km (22 miles). The thin oceanic crust is also more mafic and dense than the felsic continental crust.

These differences create complex interactions when the plates meet each other at plate boundaries. We did a whole show on plate tectonics over at the Don't Panic Geocast recently, so if you'd like to hear about the discovery and arguments over plate tectonics you should check it out.

Today, I'd like to share a tool that Dr. Charles Ammon and I made to visualize a crust model and let anyone explore the crust. All you need is Google Earth! We used a model called Crust 1.0 by Laske et al. that gives the thickness of the crust (broken into a few layers), along with some other crustal properties, at 64,800 points on the Earth - that's every one degree of latitude and longitude! They put a lot of work into making this model. Normally we would use a Fortran program to get values out of the model, but Dr. Ammon had an idea to visualize the data in a more intuitive way with Google Earth. Over the Thanksgiving holiday I wrote a Python utility to access the model values, and then we wrote a simple script that generates a Google Earth KML file from the model.
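Just to give a flavor of how little it takes, here's a minimal sketch of the KML-generation idea (this is not the actual utility from the repository, and the property names and thickness values below are made up for illustration):

```python
def make_kml(points):
    """Build a KML string with one placemark per (lat, lon, properties) point."""
    placemarks = []
    for lat, lon, props in points:
        desc = "\n".join(f"{k}: {v}" for k, v in props.items())
        placemarks.append(
            f"<Placemark><name>{lat:.1f}, {lon:.1f}</name>"
            f"<description>{desc}</description>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

# Two one-degree grid cell centers with made-up crustal thickness values
example = [(-0.5, 0.5, {"Crustal thickness (km)": 35.2}),
           (-0.5, 1.5, {"Crustal thickness (km)": 34.8})]

with open("crust_example.kml", "w") as f:
    f.write(make_kml(example))
```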

All you have to do is head over to the project's GitHub page and click the "Download ZIP" button. While you're waiting on the download, you can scroll down and read all about the development, the model, and activities to try. Next, open the folder you downloaded (most operating systems will automatically unzip it for you now). There will be several files, but the only one you need is the CRUST_1.0.kmz file.


As long as you have Google Earth installed, double click that file and you'll see the Earth appear covered in red dots. If you zoom out too far, they will disappear though!


Each red dot is a location where the model gives average crustal properties, like how fast P and S seismic waves travel and the density. All of these are explained in more detail on the project webpage. You should also try some of the projects we have listed there! As a starter, let's look at oceanic and continental crust and verify my assertion about the big thickness difference between them.

Clicking out in the Atlantic Ocean (make sure you are not on the continental shelf), we see crust about 13 km thick (the "top of the mantle" number). The water depth is also handy to have sometimes.


Clicking well onto the North American plate, we see crust about 36 km thick. Next, head over to mountainous regions and basins and see how the structure of the crust differs - why is that? Sorry, no homework answers here!


This is a really fun way to learn about the crust and a good reference tool as well! There are flyers in the docs folder that you can print to use as teaching aids or hand out to students! We had a lot of fun making this one-day project and hope that you'll explore it and let us know what you think! A big thanks to the folks who did the massive amount of work making the model - we just made it visible in Google Earth! Everything is open-source, as always.

Build the PiBooth - Nuts and Volts Cover Story


While getting wedding arrangements ready a few months ago, my wife commented that she would like a photo booth at the wedding. I immediately did what I always do: write a Python script. She was thinking more of a table with props and a disposable-camera approach, but I decided to make it into a project that we could use on any occasion and have some fun with the Raspberry Pi.

I searched online, thinking that surely somebody had already done this project and posted their code and instructions. I found a few examples of Pi-based photo booth projects, but none that had code attached (mostly they said "my code is awful, so I won't share") and none that did exactly what I wanted. I wanted the booth to count down, take multiple photos of the guests, and store/tweet the photos. I also wanted it to be simple to plug in and turn on with no experience required - same for shutdown.
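To give you a flavor of the logic before you dig into the real code (linked at the end of this post), here's a minimal sketch of the countdown-and-capture loop. The picamera and gpiozero libraries, the GPIO pin number, and the file paths are my assumptions for illustration, not necessarily what the PiBooth actually uses:

```python
import os
import time
from datetime import datetime

from gpiozero import Button
from picamera import PiCamera

button = Button(17)              # big arcade button wired to GPIO 17
camera = PiCamera()
os.makedirs("/home/pi/photos", exist_ok=True)

def take_session(num_photos=4, countdown=3):
    """Count down before each shot, capture it, and return the saved paths."""
    paths = []
    for _ in range(num_photos):
        for n in range(countdown, 0, -1):
            print(n)             # the real booth flashes this on the screen
            time.sleep(1)
        path = datetime.now().strftime("/home/pi/photos/%Y%m%d-%H%M%S.jpg")
        camera.capture(path)
        paths.append(path)
    return paths

while True:
    button.wait_for_press()      # idle until a guest pushes the button
    take_session()               # saved photos can then be tweeted/archived
```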

After a few afternoon coding sessions, I had the basic code and guts of the project working. A little time with some wood tools and I had a pretty decent looking enclosure setup as well!

An initial prototype circuit to test the code for the PiBooth.

Rough cutting the hole for the photo booth screen in the shop.

I wanted to make sure that everything I did was out in the open to be reproduced, so I decided to try writing a magazine article about the build for the electronics hobbyist magazine "Nuts and Volts." The editor sent me some guidelines, and after a few hours I had a draft article. A couple of months passed while we iterated on figures and ideas, but I was very excited to be told that the article was being considered for the cover of the March 2016 issue. As you can see, it was selected! Be sure to grab a copy of Nuts and Volts (check your bookseller/newsstand) and read all of the details. You can grab the code over on the GitHub repository. Thank you to all the wonderful folks at the magazine for making this happen, and thank you to my wife for letting me run wild with this project! Let me know if you build one, or a variant. The applications range from parties and events to making an automated ID card station for your company!


Guts of the project.

Tracking Earthquakes Across the Globe - Travel Times

A few days ago I had the Epicentral+ app running on the iPad sitting on my desk and saw an event come up on the screen. By looking at which stations measured ground movement first, second, third, and so on, I could make a good guess at the event's location. Did you know that, with the data from a single seismic station, you can begin to guess the epicenter?

Generally, earthquake locations are determined using many stations and algorithms that have been tweaked for years as we seek ever more accurate locations. The USGS performs this location for many events every day. It's fun to keep a live feed of global seismic data up and look at the patterns, which is possible thanks to applications like "Earth Motion Monitor" and "Epicentral+", both products of Prof. Charles Ammon. They are worth installing and having a look at. Prof. Ammon has seen the value in watching signals for long periods of time: you begin to pick out patterns and get an intuitive feel for the response to different events. While I don't have nearly the insight of experienced seismologists, I wanted to show you a quick and simple way to figure out about how far an event was from a given station. If you combine that with some geologic knowledge of where plate boundaries are, you can likely narrow down the region and earthquake type before anything comes out online.

The event I saw was a pretty small one, a magnitude 5.8 near the Fiji islands; it'll work for our purposes without providing too much distraction. I've marked it with a white star on the map below (a Google Earth map with the USGS plate boundary file). The event occurred near the North New Hebrides trench, part of a slightly complex zone where the Australian plate is being pushed under, or subducted beneath, the Pacific plate.

Our earthquake in question marked with a white star near the North New Hebrides trench.

Though the event was not huge, it was detected by many seismometers around the globe. In fact, there is a handy map of the stations with adequate signal automatically generated by IRIS. The contour lines on the map show distance from the event in degrees (more on that later).

Stations and their distance from the earthquake. (Image: IRIS)

I saved an image of assorted global seismic stations about an hour after the event occurred. You can see energy from the earthquake recorded on all stations, with some really nice large packets of surface waves (the largest waves on the plot).

Raw seismograms from assorted global stations about an hour after the event.

We're actually interested in the first two signals though: the classic P and S waves. Let's take a closer look at the station in Pohakuloa, Hawaii. We can see the first arrival, the P-wave, then a few minutes later the S-wave. The P-wave (a compressional wave, basically a sound wave) travels faster than the transverse S-wave, so the two arrive at different times. We know the wave speeds with depth in the Earth, so using the difference in time between these arrivals, we can come up with a rough distance to the event.

The Pohakuloa seismogram with the P and S arrivals annotated.

A graph of distance vs. S-minus-P arrival time tells the whole story. I've made a simple version: find the time we measured (7.25 minutes) on the x-axis, then look up the distance on the y-axis. Doing this (marked in dashed black lines), we see that the distance should be about 50 degrees.

The S-minus-P time vs. distance curve, with our measurement marked.
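If you'd rather not read the chart by eye, the same lookup can be done numerically. Here's a sketch using ObsPy's travel-time calculator (my tool choice for illustration, not how the figure above was made) that scans distances for the one whose S-minus-P time best matches our 7.25-minute measurement:

```python
import numpy as np
from obspy.taup import TauPyModel

model = TauPyModel(model="iasp91")
measured_sp = 7.25 * 60.0  # our measured S-minus-P time, in seconds

best_deg, best_misfit = None, None
for deg in np.arange(10.0, 95.0, 0.5):
    # assume a shallow source; the real event depth would shift this slightly
    arrivals = model.get_travel_times(source_depth_in_km=10,
                                      distance_in_degree=deg,
                                      phase_list=["P", "S"])
    times = {}
    for a in arrivals:
        times.setdefault(a.name, a.time)  # keep the first arrival of each phase
    if "P" in times and "S" in times:
        misfit = abs((times["S"] - times["P"]) - measured_sp)
        if best_misfit is None or misfit < best_misfit:
            best_deg, best_misfit = deg, misfit

print(f"Estimated distance: {best_deg:.1f} degrees")
```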

That's not bad! Knowing the earthquake and station locations, I calculated the actual distance to be 49.6 degrees. The theoretical difference in travel time from a simple Earth model is 7.14 minutes. The slight error is partly due to the complexity of the real Earth, but mostly due to me picking a rough time on an iPad screen without zooming in on the plot. The goal was just to know about how far the earthquake was from the station, though, and we did that with no problem. From that information alone it was easy to tell that the event was in the Fiji region.

Distance is in degrees, which may seem a little strange. Since the Earth is a ball-like blob, defining distances across the surface gets tricky when they are large. It turns out to be more convenient to think of distance as an angle made at the center of the Earth. Take a look at the screenshot below. It's from a program called taup and shows the actual paths taken by the P and S waves through a cut-away of the Earth. I've marked the angle I'm talking about with the Greek letter Δ. (Formally, this is the great circle arc distance in degrees. If you want to learn more about great circle arcs, you should check out our two-part podcast on map projections.)

Ray paths of the P and S waves through the Earth, with the angle Δ marked.
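ObsPy will also compute that arc for you. As a quick check, something like the following (the coordinates are rough values I've plugged in for the epicenter and station, not the actual catalog numbers) should land near the ~50 degrees we estimated:

```python
from obspy.geodetics import locations2degrees

event_lat, event_lon = -14.0, 167.0      # rough Fiji-region epicenter
station_lat, station_lon = 19.7, -155.5  # rough Pohakuloa, Hawaii location

delta = locations2degrees(event_lat, event_lon, station_lat, station_lon)
print(f"Great circle arc distance: {delta:.1f} degrees")
```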

As scientists, we often look at travel times a little differently. There are many different waves, or "phases," that we are interested in, so plotting a single line of S-minus-P arrival times is rather limiting. Instead we plot a classic "travel time curve," where the arrival time after the event is plotted as a function of distance. I've reproduced one below (a table of data plotted from C. Ammon).

Travel time curves for the major seismic phases (data from C. Ammon).
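You can also generate travel time curves yourself rather than plotting a pre-computed table. A sketch with ObsPy and matplotlib, assuming a shallow source (again my tool choice, not how the figure above was produced):

```python
import numpy as np
import matplotlib.pyplot as plt
from obspy.taup import TauPyModel

model = TauPyModel(model="iasp91")

for phase in ["P", "S", "PcP", "PKP"]:
    dists, times = [], []
    for deg in np.arange(1.0, 180.0, 2.0):
        arrivals = model.get_travel_times(source_depth_in_km=10,
                                          distance_in_degree=deg,
                                          phase_list=[phase])
        if arrivals:                               # phase may not exist at this range
            dists.append(deg)
            times.append(arrivals[0].time / 60.0)  # first arrival, in minutes
    plt.plot(dists, times, label=phase)

plt.xlabel("Distance (degrees)")
plt.ylabel("Travel time (minutes)")
plt.legend()
plt.show()
```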

We can make a plot like this from data too! Taking many stations and plotting their records as a function of distance, we get a plot like the one below. If you stand back and squint a little, you can see curved and straight lines. Those are the arrivals of different phases across the globe! Notice the lower curved line that matches the P-wave travel time above.

Notice the lines and curves made as different phases from the earthquake arrive across the globe. (Image: IRIS)

Like I mentioned, there are many different phases we can look at. To give you an idea of what a seismologist would look for, below is a version of the plot with many of the more complex phases marked. I know it looks intimidating, but you'll see that for this event we really can't easily discern many of the phases. That's because it isn't a huge event - which is nice for us, since it keeps the plot easier to look at.

Arrival plot with phases marked. (Image: IRIS)

So there you have it: by remembering the rough travel time curves, or posting a set on your wall, you can quickly determine the approximate region where an earthquake occurred just by glancing at the seismograms!

Magnitude 7.1 Alaska Earthquake Visualizations

This morning there was a magnitude 7.1 earthquake beneath Alaska. Alaska is no stranger to earthquakes, and I'm not going to talk about the tectonics, but I wanted to share the ground motion videos I produced for the event. Also be sure to check out the ground motion videos over at IRIS. At present, no major damage or injuries have been reported, though CNN did sensationalize the earthquake (as they always do).


First, a video from a nearby station in Homer, AK: about 8 mm of maximum ground displacement with some pretty large ground accelerations.

Next, the earthquake recorded in Australia. Not as exciting, but notice the packets of waves towards the end of the video; these are the surface waves that took the longer route around the globe compared to their earlier counterparts (called R1/R2 and G1/G2).

Here's a central US station near where I grew up. Nice surface waves and a good example of what looks like the PcP phase (the P-wave reflected off the planet's outer core). The PcP phase arrives at about 604 seconds, around 100 seconds after the P-wave. In the figure below the movie, the approximate PcP path is red and the P path is black. Pretty neat!

The approximate PcP path (red) and the P path (black).
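You can sanity-check that P-to-PcP gap with the same ObsPy travel-time calculator used in the last post. The source depth and distance below are rough guesses for this event/station pair, not catalog values:

```python
from obspy.taup import TauPyModel

model = TauPyModel(model="iasp91")
# roughly: a deep (~130 km) Alaska source, a central US station ~40+ degrees away
arrivals = model.get_travel_times(source_depth_in_km=130,
                                  distance_in_degree=44,
                                  phase_list=["P", "PcP"])
for a in arrivals:
    print(f"{a.name}: {a.time:6.0f} s after origin")
```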

 

Squeezing Rocks with your Bare Hands

Our lab demo group. Photo: Chris Marone

As frequent readers of the blog or listeners of the podcast will know, I really like doing outreach activities. It's one thing to do meaningful science, but another entirely to share that science with the people who paid for it (generally taxpayers) and show them why what we do matters. Outreach is also a great way to get young people interested in STEAM (Science, Technology, Engineering, Art, Math). When anyone you are talking to, adult or child, grasps a concept they never understood before, the lightbulb going on is obvious and very rewarding.

Our lab group recently participated in two outreach events. I've shared the demonstrations we commonly use before, when writing about a local science fair. A few of them probably deserve their own videos or posts, but I wanted to share one in particular that I improved greatly this year: Squeezing Rocks.

A while back I shared a video that explained how rocks are like springs. The usual demonstration was a granite block with strain gauges on it and a strip chart recorder... yes... with paper and pen. Showing lab visitors such an old piece of technology seemed a bit ironic right after they had heard about our lab being one of the most advanced in the world. Indeed, when I started the paper feed, a few parents would chuckle at recognizing the equipment from decades ago. For the video I made an on-screen chart recorder with an Arduino. That was better, but I felt there had to be a better way still. Young children don't really understand graphs or time series yet; other than making the line wiggle, they didn't get that it represented the rock deforming as they stepped on it or squeezed it.

I decided to go semi-old-school with a giant analog meter to show how much the rock was deformed. I wanted to avoid a lot of analog electronics, as they always get finicky to set up, so I elected to go the solution-on-a-chip route with a microcontroller and the HX711 load cell amplifier/digitizer. For the giant meter, building an actual meter movement didn't seem very practical, but a servo and plexiglass setup would work.

A very early test of the meter shows its 3D printed servo holder inside and the electronics trailing behind.

Another thing I wanted to change was the rock we use for the demo. The large granite bar you stepped on was bulky and hard to transport, and I thought squeezing with your hands would add to the effect. We had a small cube of granite, about 2" on a side, cut with a water jet and then ground smooth. The machine shop milled out a 1/4"-deep recess where I could epoxy the strain gauges.

Placing strain gauges under a magnifier with tweezers and epoxy.

Step-by-step build instructions are something I'm working on over at the project's Hack-a-Day page. I'm also getting the code and drawings together in a GitHub repository (slowly, since it is job application time). Currently the instructions are somewhat lacking, but stay tuned. Check out the video of the final product working below:

The demo was a great success. We debuted it at the AGU Exploration Station event, and Penn State even wrote up a nice little article about our group. Parents and kids were amazed that they could deform the rock, and even more amazed when I told them that full scale on the meter was about 0.5 µm of deformation. In other words, they had compressed the rock by about 1/40 the width of a single human hair.

A few lessons came out of this. Shipping an acrylic box is a bad idea: the meter was cracked on the side in return shipping. The damage is repairable, but I'm going to build a smaller (~12-18") unit with a wood frame and back, using acrylic only for the front panel. I also had a problem with parts breaking off the PCB in shipment. I wanted the electronics exposed for people to see, but maybe a clear case is better than an open one. I may try open one more time with a better case for transport. The final lesson was just how hard on equipment young kids can be. We had some enthusiastic rock squeezers, and by the end of the day the insulation on the wires to the rock was starting to crack. I'm still not sure of the best way to deal with this, but I'm going to try a jacketed cable for starters.

Keep an eye on the project page for updates and if any big changes are made, you'll see them here on the blog as well. I'm still thinking of ways to improve this demo and a few others, but this was a giant step forward. Kids seeing a big "Rock Squeeze O Meter" was a real attention getter.

Hmm... As I'm writing this I'm thinking about a giant LED bar graph. It's easy to transport and kind of like those test your strength games at the fair... I think I better go parts shopping.

Mill Time - Back in the Shop

While home for the holidays, I decided to make a little calibration stand that I need for a tilt meter project I'm working on. Back in the 2006 time frame I worked to learn basic machining skills on the mill and lathe. I was never amazing at it, but I managed to get a basic skill set down. I ended up back at my mentor's shop this week to make a simple part, and thought you might enjoy seeing some photos of a simple milling setup.

The first step is to have a part design that is exactly what you want to make. Problems always arise when you have a rough sketch and make it up as you go. That can work for some hobby projects, but as our systems become more and more complex, it generally just leads to wasted time, wasted material, and lots of frustration. This particular part is exceedingly simple, but I went ahead and made a full 3D CAD model anyway, just to illustrate the process.

Our goal is to make a flat plate for a tilt meter to sit on. We then elevate one end of the plate a known amount with precision-thickness pieces of metal called gauge blocks. Knowing the distance between the ends of the plate and the amount we elevate one end, we can very accurately calculate the angle. That lets me calibrate the readings from the tilt meter into real physical units of degrees or radians. All good designs start with a specification. Mine was that I wanted at least 5 different tilts ranging from 0 to 0.5 degrees - the more combinations possible the better. I also wanted a compact and rigid device that wouldn't bend, warp, or otherwise become less accurate when tilted.

Time to fire up a Jupyter notebook and do some calculations! I mainly wanted to play with the tradeoffs of baseline length, height of gauge block (they come in standard sizes), etc. After playing with the numbers, I came up with a design that used multiple baseline lengths with available gauge blocks. I decided to use ball bearings under the plate to give nice point contacts with the surface of the table. This meant I needed a plate about 6" x 12" with hemispherical divots to retain the bearings.
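The math itself is tiny - the tilt is just arcsin(height/baseline) - so the notebook is really about scanning combinations. Something like this sketch, where the baseline lengths and gauge block sizes are example values rather than my final design numbers:

```python
import numpy as np

baselines_in = [6.0, 12.0]                # candidate ball-to-ball spacings (inches)
gauge_blocks_in = [0.050, 0.100, 0.125]   # standard gauge block thicknesses (inches)

for L in baselines_in:
    for h in gauge_blocks_in:
        tilt = np.degrees(np.arcsin(h / L))  # elevate one end by h over baseline L
        print(f"L = {L:4.1f} in, h = {h:.3f} in -> {tilt:.3f} degrees")
```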

Next, I fired up FreeCAD and made the design by taking a 6" x 6" plate and using 0.5" spheres as the cutting shape for the divots. The divots are only 1/8" deep, so setting them in 1/4" from the edges is enough. Then I mirrored that 6" x 6" part to make the full part. This lets me tilt both directions by the same amount without turning or moving the instrument under test. The drawing I produced is shown in both bottom and oblique views.

Bottom and oblique views of the plate design.

Next it was time to make the plate. I ended up with a piece of 0.5"-thick 6061 aluminum plate. We first cut it to roughly the size we wanted (slightly oversized) with a bandsaw. Then the plate was clamped to the milling machine table to take off the extra material with a milling bit and give the sides a clean finish. We ended up re-clamping during the work (almost always a bad idea) and got a slight taper on the width - about 20 thou along the length - but that doesn't affect the part's usefulness.

We were then ready to make the divots. To do this we used a ball end mill, which cuts nice hemispheres. This is a very simple part, so finding the edge, setting the readout, and doing the cuts took about 20 minutes. I've included some photos in case you haven't seen a milling setup before. It's really great fun to be able to control these cutters and tools to a thousandth of an inch and sculpt metal into what you need. As I said, this isn't a complex part, but that's good because I was a little rusty!

The milling setup and cutting the divots.

In the end we got a nice plate and I think it will perform its duty very well. I'll most likely write a future post showing it in use and explaining instrument calibration. I've included some pictures of the finished plate and how it will work sitting on the ball bearings.

The finished plate, shown with the ball bearings it will sit on.

Until next time, have a happy and safe new year!

3D Filament and Humidity - Why My Prints Keep Failing

A while back I talked about some weird issues with my 3D printer filament being damaged by UV radiation from the sun. I'm back with more 3D printing stories and my current attempt at solving another issue.

I was printing some parts and kept having issues with the layers coming apart and/or having a bubbly, uneven surface texture. I generally print with ABS plastic; even though others seem to have more issues with it, I've always had better luck with it than with PLA. I decided to try some PLA anyway and also had problems with it sticking, and with the filament becoming very brittle and shattering. This problem was slowly driving me crazy, as I can usually get high-quality prints with little fuss.

First off, I moved the printer further away from the window to be sure no hot/cold convective air currents were interrupting the printing process. I even hung some cardboard sheets around the side of the print area. If I had the space I'd make a full enclosure for the printer to cut off all air currents from the room, but that will have to wait a while. (It would also dampen the noise, which is a bonus in an apartment!) I was still getting "bubbly" prints though.

Cardboard baffles taped onto the printer in an effort to reduce air currents near the print surface.

After reading more online, I decided that my filament must be too moist. The plastic absorbs moisture from the humid air, which turns to steam in the print head, causing little blow-outs and my bubbly texture. A colleague who does a lot of printing confirmed that this is an issue, and even cited his own tests showing that filament more than a few weeks old produced weaker prints. There are a few ways I can think of to deal with the issue: 1) put the filament in a bucket with a light bulb as a heater to keep the humidity low, 2) keep the filament in vacuum packs, or 3) lock it in a low-humidity environment with silica gel beads. Based on cost and convenience, I went with the third option. While this technique won't give filament an infinite life, I was hoping to salvage some of mine.

I went to a craft store and bought a plastic tub with a soft, airtight seal - specifically a Ziploc Weathertight series container. I also ordered a gallon of the silica beads commonly used to keep products dry during shipping. While those were on their way, I collected a bunch of small plastic containers and drilled many small holes in them. When the beads arrived, I filled the containers and placed them, along with my filament, in the large box.


To see how good a job the silica beads were doing, I also taped a humidity indicator card inside the box. I hadn't used these simple indicators before and had no idea how accurate they were, so I whipped up a quick sensor with a MicroView (Arduino) and checked. To my surprise, the card was dead on, even when exposed to the higher room humidity. If you only need 5-10% accuracy (like when checking whether the silica beads are saturated and need to be baked), these seem to do the trick.

A close-up of the MicroView showing 17% RH inside my container.

The humidity indicator also shows below 20%, matching the electronic sensor.

Once I verified that this solution might work, I put the rest of the filament and anything else I wanted to keep dry in the tub. There's still lots of room for future filament purchases, unpainted parts, and all of the surface-mount sensors that need to be stored in a dry environment.


After letting the filament sit in the box for a few days, I tried another print. To my surprise, there were no more blow-outs! I still have a problem with part of my print bed not adhering very well, but that's another story and another, currently only partially solved, mystery. For now, the box seems to have solved part of my 3D printing problems. I have noticed that old filament produces weaker prints, so I'm going to start stocking less filament and print most things in a single color (probably just black and white unless a special need arises).

40 Years Ago - The Wreck of the Edmund Fitzgerald

SS Edmund Fitzgerald (Image: Wikipedia)

While I normally post technical bits or project reports, occasionally we examine a historical event that happened "on this day" and see what is known about it. Today marks 40 years since the sinking of the freighter SS Edmund Fitzgerald. The ship disappeared on November 10, 1975, with no distress call, in the middle of a severe storm on the Great Lakes. Her route took her from Superior, Wisconsin, across Lake Superior, towards Detroit with a load of ore pellets. In this post I'll quickly give a synopsis of the storm and the conditions of the sinking. A lot of research, videos, articles, and more have been produced about this event, so rather than re-write it all, I'll give a sense of the general circumstances and conjecture some on the theories of what happened.

After leaving port, the Fitz was joined by another freighter, the Arthur M. Anderson, which looked nearly identical. The ships were not far apart for much of their journey and were within about 10 miles of each other at the time of last contact. While 10 miles is rather close, it was a huge distance in the storm coming upon them: a classic winter gale that had been expected to track further south. Winds rose, snow blinded the pilots, and the waves grew from a few feet to rogue wave events of about 35 feet. Storms like this have frequently claimed ships on the lake, with the Whitefish Point area (where the Fitz sank) containing over 240 wrecks. The Great Lakes as a whole hold thousands of wrecks and have claimed some 30,000 lives.

Winds and/or equipment problems rendered the radar on the Fitz inoperable at about 4:10 PM. Captain McSorley slowed down to close the range with the Anderson and get radar guidance. Later in the evening, he reported that the waves were high enough that the ship was taking significant seas on deck. The ship had also developed a bad list (i.e., was leaning to one side). At 7:10 PM the Anderson called McSorley to ask how the ship was faring; he said, "We are holding our own." Minutes later the Fitz disappeared from the Anderson's radar screen, gone without a single distress signal. All 29 on board perished and none were recovered. This probably wouldn't be anything but "another shipwreck" to the general public if it hadn't been for the Canadian songwriter Gordon Lightfoot. After reading about the accident, he wrote a ballad describing what it might have been like to be on the boat. The song (below) hit number 2 on the charts in 1976 and immortalized the Fitz and her crew.

The wreck was found shortly after the accident using an aircraft-mounted magnetometer. The find was confirmed with side-scan sonar and then with a robot submersible. The ship had a broken back, lying in two pieces on the lake bottom about 530 feet down. The aft section was upside down, about 170' from the forward section, which sits roughly upright.

Wreck Map (Image: Wikipedia)

There are several theories about how the Fitz sank, but none can be confirmed. The obvious one is that the waves were simply too much for the boat, but there are likely complicating factors. A set of rogue waves about 35 feet in height had struck the Anderson and was headed in the direction of the Fitzgerald. It is possible that these waves, which had just buried the aft cabins of the Anderson, were too much for the listing Fitzgerald and that she was submerged, never to resurface. Another popular theory is that the cargo hold flooded. The Coast Guard favored this theory, as evidence at the site suggests that not all of the hold hatch clamps were fastened; the holds gradually flooded until the ship could no longer recover from a wave strike. While this helps explain the list, it doesn't fit with the long safety record of the hatch closures used on the boat. The NTSB favored a theory that the hatch covers collapsed under a very large and heavy boarding sea, the holds instantly filled, and the boat sank. That would explain the sudden disappearance - I'd think that if a slow flood were occurring, an experienced captain (which McSorley was) would have sent a distress signal. Some have even proposed that loose debris in the water, or from the ship itself, caused significant damage topside. Yet another theory is that the ship raked a reef earlier in the day, since her navigational aids were out; the puncture slowly let water in until it was too much to stay afloat. This theory has been mostly ruled out by the lack of evidence on the exposed keel and by the absence of marks on the reef (surveyed shortly after the wreck). The final theory is simple structural failure. The hull was a new design that was welded instead of riveted, and former crew members and boat keepers said that the ship was very "bendy" in heavy seas - paint would even crack due to the large strains.

This mechanical failure theory seems most likely to me: repeated flexing and/or riding up on a set of large waves allowed the hull to fail and immediately plunge to the bottom. The hull was under excess strain due to water loading from topside damage - that water loading is what caused the list. While some say the proximity of the sections means the ship sank whole, I disagree. The ship was 729 feet long and sank in about 530 feet of water! If she had plunged bow first, hit the bottom, then broken, there should be more evidence on the bow that just isn't there. Think ships don't bend in heavy seas? Have a look at the video below of a cargo ship passageway. Note, about 11 seconds in, when the ship hits a 7-8 meter wave. That's much smaller than the rogue waves that may have hit the Fitz.

If you'd like to know more about the accident and see some interviews with experts, have a look at the Discovery Channel documentary below. We will probably never know exactly how the Fitzgerald sank, but it's a very interesting and cautionary tale. It also reminds us of the incredible power of water and wind against practically anything we can manufacture.

Setting up a Lab Thermal Chamber


I've been working on developing some geophysical instruments that will need significant temperature compensation. Often when you buy a sensor there is some temperature dependence (if not humidity, pressure, and a slew of other variables). The manufacturer will generally quote a compensation figure. Say we are measuring voltage with an analog-to-digital converter (ADC); the temperature dependence may be quoted as some number of volts per degree of temperature change over a certain range of voltages and temperatures. Generally this is a linear correction. Most of the time that is good enough, but for scientific applications we sometimes need to squeeze out every error we can and compare instruments. Maybe one sensor is slightly more temperature dependent than another; comparing the sensors could then lead us to false conclusions. This means that sometimes we need to calibrate every sensor we are going to use. In the lab I work in, we calibrate all of our transducers every 6 months using transfer standards. (Standards, transfer standards, and calibration theory are a whole series of posts in themselves.)
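For the simple linear case, the correction is easy to construct once you've measured the drift. A minimal sketch with made-up numbers (the concept only, not our lab's actual calibration code):

```python
import numpy as np

# Calibration run: chamber temperature (C) and ADC reading (V) recorded
# while the true input voltage is held constant
temps = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
readings = np.array([1.0012, 1.0006, 1.0000, 0.9993, 0.9988])

slope, intercept = np.polyfit(temps, readings, 1)
print(f"Temperature coefficient: {slope * 1e6:.1f} uV per degree C")

def correct(reading, temp, ref_temp=25.0):
    """Remove the fitted linear temperature dependence, referenced to 25 C."""
    return reading - slope * (temp - ref_temp)
```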

To do thermal calibrations, it is common to put the instruments into a thermal chamber where we can vary the temperature over a wide range of conditions while keeping the physical variable we are measuring (voltage, pressure, load, etc.) constant. Then we know any change in the reading is due to thermal effects on the system. If we are measuring something like tilt or displacement, we have to be sure that we are calibrating the electronics, not signals from thermal expansion of the metals and materials that make up our testing jig.

I scoured eBay and the surplus store at our university, but only found very large and expensive units. I remembered that several years ago Dave Jones over at the EEVblog had mentioned a cheap alternative made from a Peltier-cooled wine chiller. I dug up his video (below) and went back to the web in search of the device.

I found the chamber marketed as a reptile egg incubator on Amazon. The reviews were not great - some said the unit was off by several degrees or did not maintain the advertised +/- 1 degree. I decided to give it a shot since it was the only affordable option; if it didn't work, maybe I could hack in a new control system and use the box/Peltier element with my own electronics. In this post I'm going to show you the stock performance of the chamber and some initial tests to figure out if it will do the job.

As soon as it arrived, I set up the unit and put an environmental sensor (my WxBackpack for the Light Blue Bean, used back in the drone post) inside. I wanted to see if it was even close to the temperature displayed on the front and how good the control was with no thermal load. There was a small data drop-out causing a kink early in the record (around 30 C). The temperature looks right on what I had set, within the quoted +/- 1 degree range. There is some stabilization time, and the mean isn't the same as the set point, but that makes sense to me - you don't want to overheat eggs! This looks encouraging overall. I also noticed that the LED light inside the chamber flickered wildly when the Peltier device was drawing a lot of power heating/cooling the system. I then opened the door and set the unit to cool. After it reached room temperature, I closed the door and went to bed. It certainly isn't fast, but I was able to get down to about 2 C with no thermal load. That was good enough for me. Time to add a cable port, check out the LED issue, and test with some water jars for more thermal mass.

Initial test of the thermal chamber with nothing inside except a temperature logger. Set point shown by dashed line.

The next step was to add a port for getting test cables in and out. I decided to follow what Dave did: a 1.5" port made with a PVC fitting, a hole saw, and some silicone sealant. Below are a few pictures of drilling and inserting the fitting. I used Home Depot parts (listing below). I didn't have the correct size hole saw - that's happened a lot lately, so I invested in the Milwaukee interchangeable system. I got a threaded fitting so I can put a plug in if needed. The time-honored tradition is to put your cables through the port and stuff a rag in; that generally works as well as a plug, but it's nice to have the option.


Before, during, and after cable port placement. The center of the hole is 7 3/8" back from the front door seal, and 5 1/8" up from table top level. I used gel super glue to quickly fix the fitting to the plastic layers and foam. After that dried, I used silicone bath adhesive/sealant to seal the inside and outside. The edge of a junk-mail credit card offer made smoothing the silicone easier.

While working inside the chamber, I pulled out the LED board and noticed a dodgy-looking solder joint, which I reflowed. I also pulled the back off the unit to make sure there were no dangerous connections or anything else that looked like poor quality. Nothing jumped out.

I put the whole thing back together, put a sensor inside to monitor the environment, and tested again. This time I tried a few different set points, with and without containers of water in the chamber. First, with nothing but the sensor setup inside:


For both heating and cooling, the performance under no thermal load (other than the sensor electronics) was pretty good. Cooling is rather slow and more poorly controlled than heating, though.

Next, I put sealed containers of water on the shelves to add thermal mass and see if that changed the chamber's characteristics. It slowed the temperature change, as expected, but appeared to have little other effect (I didn't wait for full stabilization on some settings).


With a water load the chamber had similar performance, but was slower in getting to temperature as expected.

It looks like at temperatures above ambient the chamber has a stability of +/- 1 degree; below ambient it's more like a couple of degrees. The absolute reading drifts a bit too, but setting the chamber to a given value always resulted in stabilization within about a degree of that setting.

I think this will be a nice addition to my home lab. The unit isn't incredibly accurate, but I will be recording the device temperature anyway, so that works for me. It would be nice to cool down more quickly though, so I may help that along with some dry ice. Stay tuned, as I'll be testing instruments in there sometime in the next month or so.

P.S. - The LED light still flickers in a way that indicates unstable power or a bad connection. Not a deal breaker since I don't really need the light, but something to remember.