Tracking Earthquakes Across the Globe - Travel Times

A few days ago I had the Epicentral+ app running on my iPad sitting on my desk and saw an event come on the screen. By looking at which stations measured ground movement first, second, third, and so on, I could make a good guess at the event's location. Did you know that, with the data from a single seismic station, you can begin to guess the epicenter?

Generally, earthquake locations are determined using many stations and algorithms that have been refined for years in the pursuit of ever more accurate locations. The USGS performs this location for many events every day. It's fun to keep a live feed of global seismic data up and look at the patterns. This is possible thanks to applications like "Earth Motion Monitor" and "Epicentral+", both products of Prof. Charles Ammon. They are worth installing and exploring. Prof. Ammon has seen the value in being able to watch signals for long periods of time: you begin to pick out patterns and get an intuitive feel for the response produced by different events. While I don't have nearly the insight possessed by experienced seismologists, I wanted to show you a quick and simple way to figure out roughly how far an event was from a given station. If you combine that with some geologic knowledge of where plate boundaries are, you can likely narrow down the region and earthquake type before anything comes out online.

The event I saw is a pretty small event, a magnitude 5.8 near the Fiji islands; it'll work for our purposes and not provide too much distraction. I've marked it with a white star on the map below (a Google Earth map with the USGS plate boundary file). This event occurred near the North New Hebrides trench, part of a slightly complex zone where the Australian plate is being pushed under, or subducted, beneath the Pacific plate.

Our earthquake in question marked with a white star near the North New Hebrides trench.

Though the event was not huge, it was detected by many seismometers around the globe. In fact, there is a handy map of the stations with adequate signal automatically generated by IRIS. The contour lines on the map show distance from the event in degrees (more on that later).

Stations and their distance from the earthquake. (Image: IRIS)

I saved an image of assorted global seismic stations about an hour after the event occurred. You can see energy from the earthquake recorded on all stations, with some really nice large packets of surface waves (the largest waves on the plot).

raw_seismograms

We're actually interested in the first two signals though: the classic P and S waves. Let's take a closer look at the station in Pohakuloa, Hawaii. We can see the first arrival, the P-wave, then a few minutes later the S-wave. The P-wave (a compressional wave, essentially a sound wave) travels faster than the transverse S-wave, so they arrive at different times. We know how wave speeds vary with depth in the Earth, so by using the difference in arrival times, we can come up with a rough distance to the event.

seismograms_annotated

A graph of distance vs. arrival times can tell us the whole story. I've made a simple version in which you can find the time we measured (7.25 minutes) on the x-axis, then look up the distance on the y-axis. If we do this (marked in dashed black lines), we see that the distance should be about 50 degrees.
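
The graphical lookup can be sketched in code too. This is a minimal version assuming a small table of approximate S-P times read off a standard travel-time curve; the table values are illustrative, not a reference table:

```python
import numpy as np

# Rough S-P differential times (minutes) vs. distance (degrees) for a
# shallow event, read off a standard travel-time curve. Approximate
# illustration values only.
distance_deg = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
sp_minutes   = np.array([2.2, 5.0, 7.1, 9.0, 10.5])

def distance_from_sp(sp_time_min):
    """Estimate epicentral distance (degrees) from an S-P time (minutes)."""
    return np.interp(sp_time_min, sp_minutes, distance_deg)

print(distance_from_sp(7.25))  # roughly 50 degrees, as read off the plot
```

The lookup works in this direction (time in, distance out) because S-P time increases monotonically with distance over this range.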

SmP_time_curve

That's not bad! I calculated the actual distance knowing the earthquake location and station location to be 49.6 degrees. The theoretical difference in travel time based on a simple Earth model is 7.14 minutes. The slight error is due to a complex real Earth, but mostly due to me picking a rough time on an iPad screen without really zooming in on the plot. The goal was to know about how far away the earthquake was from the station though, and we did that with no problem. Just from that information it was easy to tell that the event was in the Fiji region.

Distance is in degrees, which may seem a little strange. Since the Earth is a ball-like blob, defining distances across the surface gets tricky when distances get large. It turns out to be more convenient to think of this distance as an angle made with the center of the Earth. Take a look at the screenshot below. It's from a program called TauP and shows the actual paths taken by the P and S waves through a cut-away of the Earth. I've marked the angle I'm talking about with the Greek letter Δ. (We would formally say that this is the great circle arc distance in degrees. If you want to learn more about great circle arcs, you should check out our two-part podcast on map projections.)
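
That angle Δ is easy to compute from two latitude/longitude pairs with the spherical law of cosines. A quick sketch; the coordinates below are my rough guesses for the event and a Hawaiian station, not catalog values:

```python
import math

def arc_degrees(lat1, lon1, lat2, lon2):
    """Great-circle arc between two points, returned as the angle (in
    degrees) subtended at the center of the Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Spherical law of cosines; clamp for floating-point safety.
    cos_arc = (math.sin(p1) * math.sin(p2) +
               math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_arc))))

# Approximate coordinates (guesses for illustration): an event near the
# North New Hebrides trench and a station on the Big Island of Hawaii.
print(arc_degrees(-14.9, 167.3, 19.8, -155.5))  # roughly 50 degrees
```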

taup_path_annotated

As scientists, we often look at a travel time plot a little differently. There are many different waves or "phases" that we are interested in, so plotting a single S-P arrival curve is rather limiting. Instead we plot a classic "travel time curve," where the arrival time after the event is plotted as a function of distance. I've reproduced one below (table of data plotted from C. Ammon).

travel_time_curve

We can make a plot like this from data too! Taking many stations and plotting their records as a function of distance, we get a plot like the one below. You can see curved and straight lines if you stand back and squint a little. Those are arrivals of different phases across the globe! Notice the lower curved line that matches the P-wave travel time above.

Notice the lines and curves made as different phases from the earthquake arrive across the globe. (Image: IRIS)

Like I mentioned, there are many different phases we can look at. To give you an idea of things a seismologist would look for, there is a version of the plot with a lot of the more complex phases marked on it below. I know it looks intimidating, but for this event, you'll see we really can't easily discern a lot of the phases. That's because this really isn't a huge event, but it's nice for us because that means the plot is easier to look at.

Arrival plot with phases marked. (Image: IRIS)

So there you have it, by remembering the rough travel time curves or posting one on your wall, you can quickly determine the approximate region an earthquake occurred in just by glancing at the seismograms!

Magnitude 7.1 Alaska Earthquake Visualizations

This morning there was a magnitude 7.1 earthquake beneath Alaska. Alaska is no stranger to earthquakes, and I'm not going to talk about the tectonics, but I wanted to share the ground motion videos I produced for the event. Be sure to check out the ground motion videos over at IRIS as well. At present, no major damage or injuries have been reported, though CNN did sensationalize the earthquake (as they always do):

Screenshot 2016-01-24 07.20.30

First a video from a nearby station, Homer, AK. About 8 mm maximum ground displacement with some pretty large ground accelerations.

The earthquake recorded in Australia. Not as exciting, but notice the packets of waves towards the end of the video; these are the surface waves that took the longer route around the globe compared to their earlier counterparts (called R1/R2 and G1/G2).

Here's a central US station near where I grew up. Nice surface waves and a good example of what looks like the PcP phase (a P-wave reflected off the planet's outer core). The PcP phase is at about 604 seconds, around 100 seconds after the P wave. In the figure below the movie, the approximate PcP path is red, the P path is black. Pretty neat!

Screenshot 2016-01-24 13.32.59

 

Squeezing Rocks with your Bare Hands

Our lab demo group. Photo: Chris Marone

As frequent readers of the blog or listeners of the podcast will know, I really like doing outreach activities. It's one thing to do meaningful science, but another entirely to be able to share that science with the people that paid for it (taxpayers generally) and show them why what we do matters. Outreach is also a great way to get young people interested in STEAM (Science, Technology, Engineering, Art, Math). When anyone you are talking to, adult or child, gets a concept that they never understood before, the lightbulb going on is obvious and very rewarding.

Our lab group recently participated in two outreach events. I've shared about the demonstrations we commonly use before when talking about a local science fair. There are a few that probably deserve their own videos or posts, but I wanted to share one in particular that I improved upon greatly this year: Squeezing Rocks.

A while back I shared a video that explained how rocks are like springs. The normal demonstration we used was a granite block with strain gauges on it and a strip chart recorder... yes... with paper and pen. I thought showing lab visitors such an old piece of technology was a bit ironic after they had just heard about our lab being one of the most advanced in the world. Indeed, when I started the paper feed, a few parents would chuckle at recognizing the equipment from decades ago. For the video I made an on-screen chart recorder with an Arduino. That was better, but I felt there had to be a better way still. Young children don't really understand graphs or time series yet. Other than making the line wiggle, they didn't really get that it represented the rock deforming as they stepped on it or squeezed it.

I decided to go semi old-school with a giant analog meter to show how much the rock was deformed. I wanted to avoid a lot of analog electronics, since they always get finicky to set up, so I elected to go the solution-on-a-chip route with a micro-controller and the HX711 load cell amplifier/digitizer. For the giant meter, I didn't think building an actual meter movement was very practical, but a servo and plexiglass setup should work.
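
The firmware itself runs on the micro-controller, but the heart of it is just a scaling step from raw load-cell counts to a needle angle. Here's a Python sketch of that logic; the calibration constants (`raw_zero`, `raw_full`) are hypothetical stand-ins, not values from my build:

```python
def reading_to_angle(raw, raw_zero, raw_full, angle_min=0.0, angle_max=180.0):
    """Map a raw HX711 count onto a servo angle for the giant meter.

    raw_zero / raw_full are hypothetical calibration constants: the
    counts at no load and at full-scale deformation of the rock.
    """
    frac = (raw - raw_zero) / (raw_full - raw_zero)
    frac = max(0.0, min(1.0, frac))  # pin the needle at the end stops
    return angle_min + frac * (angle_max - angle_min)

print(reading_to_angle(50_000, raw_zero=0, raw_full=100_000))  # 90.0 (mid-scale)
```

Clamping the fraction matters in practice: an enthusiastic squeezer can push the reading past full scale, and you want the needle to stop at the peg rather than the servo trying to drive past its limit.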

A very early test of the meter shows its 3D-printed servo holder inside and the electronics trailing behind.

Another thing I wanted to change was the rock we use for the demo. The large granite bar you stepped on was bulky and hard to transport. I also thought squeezing with your hands would add to the effect. We had a small cube of granite, about 2" on a side, cut with a water jet and then ground smooth. The machine shop milled out a 1/4" deep recess where I could epoxy the strain gauges.

Placing strain gauges under a magnifier with tweezers and epoxy.

Going into step-by-step build instructions is something I'm working on over at the project's Hack-a-Day page. I'm also getting the code and drawings together in a GitHub repository (slowly, since it is job application time). Currently the instructions are lacking somewhat, but stay tuned. Check out the video of the final product working below:

The demo was a great success. We debuted it at the AGU Exploration Station event. Penn State even wrote up a nice little article about our group. Parents and kids were amazed that they could deform the rock, and even more amazed when I told them that full scale on the meter was about 0.5µm of deformation. In other words they had compressed the rock about 1/40 the width of a single human hair.

A few lessons came out of this. Shipping an acrylic box is a bad idea; the meter was cracked on the side in return shipping. The damage is repairable, but I'm going to build a smaller (~12-18") unit with a wood frame and back, using acrylic only for the front panel. I also had a problem with parts breaking off the PCB in shipment. I wanted the electronics exposed for people to see, but maybe a clear case is better than an open one. I may try open electronics one more time with a better case for transport. The final lesson was just how hard on equipment young kids can be. We had some enthusiastic rock squeezers, and by the end of the day the insulation on the wires to the rock was starting to crack. I'm still not sure of the best way to deal with this, but I'm going to try a jacketed cable for starters.

Keep an eye on the project page for updates and if any big changes are made, you'll see them here on the blog as well. I'm still thinking of ways to improve this demo and a few others, but this was a giant step forward. Kids seeing a big "Rock Squeeze O Meter" was a real attention getter.

Hmm... As I'm writing this I'm thinking about a giant LED bar graph. It's easy to transport and kind of like those test your strength games at the fair... I think I better go parts shopping.

Mill Time - Back in the Shop

While home for the holidays, I decided to make a little calibration stand that I need for a tilt meter project I'm working on. Back in the 2006 time frame I had worked to learn basic machining skills on the mill and lathe. I never was amazing at it, but managed to get a basic skill set down. I ended up back over at my mentor's shop this week to make a simple part, but thought you may enjoy seeing some photos of a simple milling setup.

The first step is to have a part design that is exactly what you want to make. Problems always arise when you have a rough sketch and make it up as you go. For some hobby projects that can work, but as our systems become more and more complex, it generally just leads to wasted time, material, and lots of frustration. This particular part is exceedingly simple, but I went ahead and made a full 3D CAD model anyway, just to illustrate the process.

Our goal is to make a flat plate for a tilt meter to sit on. We will then elevate one end of the plate a known amount with precision-thickness pieces of metal called gauge blocks. Knowing the distance between the ends of the plate and the amount we elevate one end, we can very accurately calculate the angle. That lets me calibrate the readings from the tilt meter to real physical units of degrees or radians. All good designs start with a specification; mine was at least 5 different tilts ranging from 0 to 0.5 degrees, with the more combinations possible the better. I also wanted a compact and rigid device that wouldn't bend, warp, or otherwise become less accurate when tilted.

Time to fire up a Jupyter notebook and do some calculations! I mainly wanted to be able to play with the tradeoffs of baseline length, height of gauge block (they come in standard sizes), etc. After playing with the numbers some, I came up with a design that used multiple baseline lengths with available gauge blocks. I decided to use ball bearings under the plate to give nice point contacts with the surface of the table as well. This meant I needed a plate about 6" x 12" with hemispherical divots to retain the bearings.
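
The kind of tradeoff calculation I ran in the notebook looks roughly like this. The angle for a given baseline b and block height h is asin(h/b); the baselines and gauge-block sizes below are illustrative stand-ins, not my final numbers:

```python
import math
from itertools import product

# Hypothetical values for illustration: two usable baselines (inches,
# ball-center to ball-center) and a few gauge-block heights (inches).
baselines = [6.0, 12.0]
blocks = [0.030, 0.050, 0.070, 0.100]

# All achievable tilts within the 0-0.5 degree spec, duplicates removed.
tilts = sorted({round(math.degrees(math.asin(h / b)), 4)
                for b, h in product(baselines, blocks)
                if math.degrees(math.asin(h / b)) <= 0.5})
print(tilts)
```

Note that some combinations collapse to the same angle (0.050" over 6" equals 0.100" over 12"), which is exactly the sort of thing that is easy to miss without enumerating the combinations.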

Next, I fired up FreeCAD and made the design by taking a 6" x 6" plate and using 0.5" spheres as the cutting shape to make the divots. The divots are only 1/8" deep, so setting them in 1/4" from the edges is enough. Then I just mirrored that 6" x 6" part to make the full part. This lets me tilt both directions the same amount without turning or moving the instrument under test. The drawing I produced is shown in both bottom and oblique view.

bottom oblique

Next it was time to make the plate. I ended up with a piece of 0.5" thick 6061 aluminum plate. We first cut it to roughly the size we wanted (slightly oversized) with a bandsaw. Then the plate was clamped down to the milling machine table to take off the extra material with a milling bit and give the sides a nice, clean finish. We ended up re-clamping during the work (almost always a bad idea) and got a slight taper across the width, but that doesn't affect the part's usefulness. (By slight taper I mean about 20 thou along the length.)

We then were ready to make the divots. To do this we used a ball end mill that cuts nice hemispheres. This is a very simple part, so just finding the edge, setting the readout, and doing the cuts took about 20 minutes. I've included some photos in case you haven't seen a milling setup before. It's really great fun to be able to control these cutters and tools to a thousandth of an inch and sculpt metal into what you need. As I said, this isn't a complex part, but that's good because I was a little rusty!

2015-12-30 10.15.19 2015-12-30 10.51.59

In the end we got a nice plate and I think it will perform its duty very well. I'll most likely write a future post showing it in use and explaining instrument calibration. I've included some pictures of the finished plate and how it will work sitting on the ball bearings.

2015-12-30 15.45.32

2015-12-30 15.46.50

Until next time, have a happy and safe new year!

3D Filament and Humidity - Why My Prints Keep Failing

A while back I talked about some weird issues with my 3D printer filament being damaged by UV radiation from the sun. I'm back with more 3D printing troubles and my current attempt at solving them.

I was printing some parts and kept having issues with the layers coming apart and/or having a bubbly, uneven surface texture. I generally print with ABS plastic; even though others seem to have more issues with it, I've always had better luck with it than with PLA. I decided to try some PLA anyway and also had problems with it sticking, and with the filament becoming very brittle and shattering. This problem was slowly driving me crazy, as I can usually get high quality prints with little fuss.

First off, I moved the printer further away from the window to be sure no hot/cold convective air currents were interrupting the printing process. I even hung some cardboard sheets around the side of the print area. If I had the space I'd make a full enclosure for the printer to cut off all air currents from the room, but that will have to wait for a while. (It would also dampen the noise, which is a bonus in an apartment!) I was still getting "bubbly" prints though.

Cardboard baffles taped onto the printer in an effort to reduce air currents near the print surface.

After reading more online, I decided that my filament must be too moist. The plastic absorbs moisture from the humid air, which turns to steam in the print head, causing little blow-outs and my bubbly texture. A colleague who does a lot of printing confirmed that this is an issue, and even cited his tests showing that filament over a few weeks old produced weaker prints. There are a few ways I can think of to help with the issue: 1) put the filament in a bucket with a light bulb as a heater to keep the humidity low, 2) keep the filament in vacuum packs, 3) lock it in a low humidity environment with silica gel beads. Based on cost and convenience, I ended up going with the third option. While this technique won't give filament an infinite life, I was hoping to salvage some of mine.

I went to a craft store and bought a plastic tub with a soft, airtight seal; specifically the Ziploc Weathertight series container. I also ordered a gallon container of the silica beads that are commonly used to keep products dry during shipping. While the products were on their way, I collected a bunch of plastic containers and drilled many small holes in them. When the beads arrived I filled the containers with them and placed them and my filament in the large box.

2015-11-01 20.30.35

In an effort to see how good of a job the silica beads were doing, I also taped a humidity indicator inside the box. I hadn't used these simple indicators before and had no idea how accurate they were, so I whipped up a quick sensor with a MicroView (Arduino) and checked it. To my surprise, it was dead on, even when exposed to the higher room humidity. If you only need 5-10% accuracy (like when seeing if the silica beads need to be baked because they are saturated) these seem to do the trick.

A close-up of the MicroView showing 17% RH inside my container.

The humidity indicator also shows below 20%, matching the electronic sensor.

Once I verified that this solution might work, I put the rest of the filament and anything else I wanted to stay dry in the tub. Still lots of room left for future filament purchases, unpainted parts, and all of the surface mount sensors that need to be stored in a dry environment.

2015-11-01 20.30.28

After letting the filament sit in the box for a few days, I tried another print. To my surprise, there were no more blow-outs! I still have a problem with prints not adhering well on part of my print bed, but that's another story and another, currently only partially solved, mystery. For now, this box seems to have solved part of my 3D printing problems. I have noticed that old filament does produce weaker prints, so I'm going to start stocking less filament and print most things in a single color (probably just black and white unless a special need arises).

40 Years Ago - The Wreck of the Edmund Fitzgerald

SS Edmund Fitzgerald (Image: Wikipedia)

While I normally post technical bits or project reports, occasionally we examine a historical event that happened "on this day" and see what is known about it. Today marks 40 years since the sinking of the freighter SS Edmund Fitzgerald. The ship disappeared November 10, 1975, with no distress call, in the middle of a severe storm on the Great Lakes. The prescribed route took her from Superior, Wisconsin, across Lake Superior, and towards Detroit with a load of ore pellets. In this post I'll quickly give a synopsis of the storm and the conditions of the sinking. A lot of research, videos, articles, and more have been produced on this event, so rather than re-write it all, we'll give a sense of the general circumstances and conjecture some on the theories of what happened.

After leaving port, the Fitz was joined by another freighter, the Arthur M. Anderson, which looks nearly identical. The ships were not far apart for much of their journey and were within about 10 miles of each other at the time of last contact. While 10 miles is rather close, it was a huge distance in the storm that was coming upon them. It was a classic winter gale that had been expected to track further south. Winds rose, snow blinded the pilots, and the waves continuously grew from a few feet to rogue wave events of about 35 feet. Storms like this have frequently claimed ships on the lake, with the Whitefish Point area (where the Fitz sank) alone containing over 240 wrecks. The Great Lakes as a whole have claimed thousands of ships and some 30,000 lives.

Winds and/or equipment problems rendered the radar on the Fitz inoperable around 4:10 PM. Captain McSorley slowed down to close the range with the Anderson to get radar guidance. Later in the evening, the captain reported that the waves were high enough that the ship was taking significant seas on deck. The ship had also developed a bad list (i.e., it was leaning to one side). At 7:10 PM the Anderson called McSorley to ask how the ship was faring; he replied, "We are holding our own." Minutes later the Fitz disappeared from the Anderson's radar screen and was gone without a single distress signal. All 29 on board perished and none were recovered. This probably wouldn't be anything but "another shipwreck" to the general public if it hadn't been for the Canadian songwriter Gordon Lightfoot. After reading about the accident, he wrote a ballad that described what it might have been like to be on the boat. The song (below) hit number 2 on the charts in 1976 and immortalized the Fitz and her crew.

The wreck was found shortly after the accident using an aircraft-mounted magnetometer. The finding was confirmed with side-scan sonar and then with a robot submersible. The ship had a broken back, lying in two pieces on the lake bottom about 530 feet down. The aft of the ship was upside down and about 170' from the forward section. The forward part of the wreck sits roughly upright.

Wreck Map (Image: Wikipedia)

There are several theories about how the Fitz sank, but none can be confirmed. The obvious theory is that the waves were too much for the boat, but there are likely more complicating factors. A set of rogue waves about 35 feet in height had struck the Anderson and were headed in the direction of the Fitzgerald. It is possible that these waves, which had just buried the aft cabins of the Anderson, were too much for the listing Fitzgerald and that she was submerged never to resurface.

Another popular theory is that the cargo hold flooded. The Coast Guard favors this theory, as evidence at the site suggests that not all of the hold hatch clamps were fastened. The holds gradually flooded until the ship could no longer recover from a wave strike. While this helps explain the list, it doesn't fit with the long safety record of the hatch closures used on the boat. The NTSB favored a theory that the hatch covers collapsed under a very large and heavy boarding sea, the holds instantly filled, and the boat sank. That would explain the sudden disappearance - I'd think that if a slow flood were occurring, an experienced captain (which McSorley was) would have sent a distress signal. Some have even proposed that loose debris in the water, or from the ship itself, caused significant damage to the topside. Yet another theory is that the ship raked a reef earlier in the day, since her navigational aids were out. The puncture slowly let water in until it was too much to stay afloat. This theory has been mostly ruled out by the lack of evidence on the exposed keel of the ship and by the absence of marks on the reef (surveyed shortly after the wreck).

The final theory is simple structural failure. The hull was a new design that was welded instead of riveted. Former crew members and boat keepers said that the ship was very "bendy" in heavy seas and that paint would even crack due to the large strains.

This mechanical failure theory seems most likely to me. Repeated flexing and/or riding up on a set of large waves allowed the hull to fail and immediately plunge to the bottom. The hull was under excess strain due to water loading from topside damage; that water loading is what caused the list. While some say the proximity of the sections means the ship sank whole, I disagree. The ship was 729 feet long and sank in about 530 feet of water! If she had plunged bow first, hit the bottom, then broken, there should be more evidence on the bow that just isn't there. Think ships don't bend in heavy seas? Have a look at the video below of a cargo ship passageway. Note about 11 seconds in when the ship hits a 7-8 meter wave. That's much smaller than the rogue waves that may have hit the Fitz.

If you'd like to know more about the accident and see some interviews with experts, have a look at the Discovery channel documentary below. We will probably never know exactly how the Fitzgerald sank, but it's a very interesting and cautionary tale. It also reminds us of the incredible power of water and wind against practically anything that we can manufacture.

Setting up a Lab Thermal Chamber

chamber_dark

I've been working on developing some geophysical instruments that will need significant temperature compensation. Often when you buy a sensor there is some temperature dependence (if not humidity, pressure, and a slew of other variables). The manufacturer will generally quote a compensation figure. Say we are measuring voltage with an analog-to-digital converter (ADC); the temperature dependence may be quoted as some number of volts per degree of temperature change over a certain range of voltages and temperatures. Generally this is a linear correction. Most of the time that is good enough, but for scientific applications we sometimes need to squeeze out every error we can and compare instruments. Maybe one sensor is slightly more temperature dependent than another; comparing the sensors could then lead us to some false conclusions. This means that sometimes we need to calibrate every sensor we are going to use. In the lab I work in, we calibrate all of our transducers every 6 months using transfer standards. (Standards, transfer standards, and calibration theory are a whole series of posts in themselves.)
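
As a concrete sketch of the linear correction described above, here's the arithmetic in Python; the drift coefficient and reference temperature are invented for illustration, not from any datasheet:

```python
def compensate(reading, temp_c, coeff_per_c, temp_ref_c=25.0):
    """Apply a linear temperature correction to a raw sensor reading.

    coeff_per_c is the manufacturer-quoted drift, in units of the reading
    per degree C. The values used below are made up for illustration.
    """
    return reading - coeff_per_c * (temp_c - temp_ref_c)

# An ADC reading that drifts +0.2 mV per degree C, taken at 35 C:
print(compensate(1000.0, 35.0, 0.2))  # 998.0 mV, referred back to 25 C
```

Calibration in a thermal chamber is essentially the process of measuring `coeff_per_c` (and checking whether a single linear coefficient is even adequate) for each individual sensor.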

To do thermal calibrations, it is common to put the instruments into a thermal chamber in which we can vary the temperature over a wide range of conditions while keeping the physical variable we are measuring (voltage, pressure, load, etc.) constant. Then we know any change in the reading is due to thermal effects on the system. If we are measuring something like tilt or displacement, we have to be sure that we are calibrating the electronics, not signals from thermal expansion of the metals and materials that make up our testing jig.

I scoured eBay and the surplus store at our university, but only found very large and expensive units. I remembered that several years ago Dave Jones over at the EEVblog had mentioned a cheap alternative made from a Peltier-cooled wine fridge. I dug up his video (below) and went to the web again in search of the device.

I found the chamber marketed as a reptile egg incubator on Amazon. The reviews were not great, some saying the unit was off by several degrees or did not maintain the +/- 1 degree temperature as marketed. I decided to give it a shot since it was the only affordable option; if it didn't work, maybe I could gut it and use the box/Peltier element with my own control system. In this post I'm going to show you the stock performance of the chamber and some initial tests to figure out if it will do the job.

As soon as it arrived, I set up the unit and placed an environmental sensor (my WxBackpack for the Light Blue Bean, used back in the drone post) inside. I wanted to see if it was even close to the temperature displayed on the front and how good the control was with no thermal load inside. There was a small data drop-out causing a kink early in the record (around 30 C). It looks like the temperature is right on what I had set it to, within the quoted +/- 1 degree range. There is some stabilization time, and the mean isn't the same as the set point, but that makes sense to me: you don't want to overheat eggs! This looks encouraging overall. I also noticed that the LED light inside the chamber flickered wildly when the Peltier device was drawing a lot of power heating/cooling the system. I then opened the door and set the unit to cool. After it reached room temperature, I closed the door and went to bed. It certainly isn't fast, but I was able to get down to about 2 C with no thermal load. That was good enough for me. Time to add a cable port, check out the LED issue, and test with some water jars for more thermal mass.

Initial test of the thermal chamber with nothing inside except a temperature logger. Set point shown by dashed line.

The next step was to add a cable port to get test cables in and out. I decided to follow what Dave did and add a 1.5" port with a PVC fitting, a hole saw, and some silicone sealant. Below are a few pictures of drilling and inserting the fitting. I used Home Depot parts (listing below). I didn't have the correct size hole saw; that's happened a lot lately, so I invested in the Milwaukee interchangeable system. I got a threaded fitting so I can put a plug in if needed. The time-honored tradition is to put your cables through the port and stuff a rag in around them. This generally works as well as a plug, but it's nice to have the option.


Before, during, and after cable port placement. The center of the hole is 7 3/8" back from the front door seal, and 5 1/8" up from table top level. I used gel super glue to quickly fix the fitting to the plastic layers and foam. After that dried, I used silicone bath adhesive/sealant to seal the inside and outside. The edge of a junk-mail credit card offer made smoothing the silicone easier.

While working inside the chamber I pulled out the LED board and noticed a dodgy looking solder joint. I reflowed it. I also pulled the back off the unit to make sure there were no dangerous connections or anything that looked poor quality. Nothing jumped out.

I put the whole thing back together and put a sensor in to monitor the environment and tested again. This time I tried a few different set points with and without containers of water inside the chamber. First with nothing but the sensor setup inside:

For both heating and cooling, performance under no thermal load (other than the sensor electronics) was pretty good. Cooling is rather slow and more poorly controlled than heating, though.

Next I put sealed containers of water on the shelves of the chamber to add some thermal mass and see if that changed the characteristics of the chamber any. It did slow the temperature change as expected, but appears to have had little other effect (I didn't wait long enough for stabilization on some settings).
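The slower approach to temperature with the water inside is just what a simple lumped thermal model predicts: the time constant grows with the heat capacity inside the chamber. Here's a minimal sketch of that idea; all the numbers (time constants, temperatures) are invented for illustration, not measured from my chamber:

```python
# First-order lumped thermal model: dT/dt = (T_set - T) / tau.
# The time constant tau scales with the thermal mass inside, so
# adding water jars slows the approach to the set point.
# All numbers below are made up for illustration.

import math

def temp_at(t, t_start, t_set, tau):
    """Chamber temperature after t seconds, single time constant tau."""
    return t_set + (t_start - t_set) * math.exp(-t / tau)

tau_empty, tau_water = 600.0, 2400.0   # seconds, assumed
t_start, t_set = 22.0, 37.0            # degrees C

for tau, label in [(tau_empty, "empty"), (tau_water, "with water")]:
    # Time to get within 1 C of the set point: tau * ln(|T_start - T_set| / 1)
    t_settle = tau * math.log(abs(t_start - t_set) / 1.0)
    print(f"{label}: {t_settle / 60:.1f} min to within 1 C of set point")
```

With a 4x larger time constant the settle time is 4x longer, which matches the "slower but otherwise similar" behavior in the plot.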

With a water load the chamber had similar performance, but was slower in getting to temperature as expected.

It looks like at temperatures above ambient the chamber has a stability of +/- 1 degree. Below ambient it becomes a couple of degrees. The absolute reading drifts a bit too. Setting the chamber to a given reading always resulted in stabilization within about a degree of the setting though.
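If you want to put numbers like these on your own logs, the calculation is just the mean offset from the set point and half the peak-to-peak swing of the settled data. A quick sketch with made-up readings:

```python
# Quick stability check from a temperature log: compare the mean of
# the settled portion to the set point, and report half the
# peak-to-peak spread as the +/- stability figure.
# The readings below are invented for illustration.

import statistics

set_point = 35.0
# Pretend these are readings after the chamber settled (degrees C).
settled = [34.6, 34.9, 35.2, 35.4, 35.1, 34.8, 34.7, 35.0, 35.3, 35.1]

mean_t = statistics.mean(settled)
offset = mean_t - set_point
swing = (max(settled) - min(settled)) / 2.0

print(f"mean {mean_t:.2f} C, offset {offset:+.2f} C, stability +/- {swing:.2f} C")
```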

I think this will be a nice addition to my home lab. While the unit isn't incredibly accurate, I will be recording the device temperature anyway, so that works for me. It'd be nice to cool down more quickly though, so I may facilitate that with some dry ice. Stay tuned as I'll be testing instruments in there sometime in the next month or so.

P.S. - The LED light still flickers in a way that indicates unstable power/connection. Not a deal breaker for me since I don't really need the light, but something to remember.

Debugging - Book Review

I end up doing a lot of debugging; in fact, every single day I'm debugging something. Some days it is software and scripts that I'm using for my PhD research, some days it is failed laboratory equipment, and some days it's working the problems out of a new instrument design. Growing up working on mechanical things really helped me develop a knack for isolating problems, but this is not knowledge that everyone has the occasion to develop. I'm always looking for ways to help people learn good debugging techniques. There's nothing like discovering, tracking down, and fixing a bug in something. Also, the more good debuggers there are in the world, the fewer hours are wasted fruitlessly guessing at problems.

I'd heard about a debugging book that was supposed to be good for any level of debugger, from engineer to manager to homeowner. I was a little suspicious since that is such a wide audience, but I found that my library had the book and checked it out; it is "Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems" by David Agans. The book sat on my shelf for a few months while I was "too busy" to read it. Finally, I decided to tackle a chapter a day. The chapters are short, and I could handle two weeks of following the book. Each morning when I got to work, I read the chapter for that day. One weekend I read an extra chapter or two because they were enjoyable.

I'm not going to ruin the book, but I am going to tell you the 9 rules (it's okay, they are also available in poster form on the book website).

  1. Understand the System
  2. Make It Fail
  3. Quit Thinking and Look
  4. Divide and Conquer
  5. Change One Thing at a Time
  6. Keep an Audit Trail
  7. Check the Plug
  8. Get a Fresh View
  9. If You Didn't Fix It, It Ain't Fixed

They seem simple, but think of the times you've tried to fix something and skipped around because you thought you knew better, only to have it come back and sting you. If you've done a lot of debugging, you can already see the value of this book.
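"Divide and Conquer" is the rule I lean on most with code: binary-search the space of suspects instead of checking them one at a time. A toy sketch of my own (not from the book; the `is_bad` function is a stand-in for whatever test you'd actually run on each version):

```python
# "Divide and Conquer" as a binary search over versions (or steps,
# or config changes) for the first one that fails, assuming every
# version after the first bad one is also bad.

def first_bad(n_versions, is_bad):
    """Return the index of the first bad version in 0..n_versions-1."""
    lo, hi = 0, n_versions - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid          # bug is at mid or earlier
        else:
            lo = mid + 1      # bug came after mid
    return lo

# Toy example: the bug appeared in version 13 of 100.
print(first_bad(100, lambda v: v >= 13))  # found in ~7 tests, not up to 100
```

This is the same idea behind tools like `git bisect`.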

The book contains a lot of "war stories" that tell of times when a rule, or several rules, were the key to solving a debugging problem. My personal favorite was the story about a video conferencing system that seemed to crash randomly. It turns out the video compression had trouble with certain patterns, and whenever the author wore a plaid shirt to work and tested the system, it failed. He ended up sending photocopies of his shirt to the manufacturer of the chip. Stories like that made the book a pleasure to read and show how you have to pay attention to everything when debugging.

The book has a slight hardware leaning, but has examples of software, hardware, and home appliances. I think that all experimentalists or engineers should read this early on in their education. It'll save hours of time and make you seem like a bug whisperer. Managers can learn from this too and see the need to provide proper time, tools, and support to engineering.

If you like the blog, you'll probably like this book or know someone that needs it for Christmas. I am not being paid to write this, I don't know the author or publisher, but wanted to share this find with the blog audience. Enjoy and leave any comments about resources or your own debugging issues!

Using Visual Mics in Geoscience

Image: TED Talk

Last time I wrote up the basics of a tip sent in by Evan over at Agile Geoscience. This technology is very neat; if you haven't read that post yet, please do, and watch the TED talk. This post is about how we could apply the technique to problems in geoscience. Some of these ideas are "low hanging fruit" that could be relatively easy to accomplish; others are in need of a few more PhD students to flesh them out. I'd love to work on it myself, but I keep hearing about this thing called graduation and think it sounds like a grand time. Maybe after graduation I can play with some of these in detail; maybe before, I can just experiment around a bit.

In his email to me, Evan pointed out that this visual microphone work IS seismology of sorts. In seismology we look at the motion of the Earth with seismometers or geophones. If we have a lot of them and can look at the motion of the Earth in a lot of places over time, we can learn a lot about what it's like inside the Earth. This type of survey has been used to understand problems as big as the structure of the Earth and as small as finding artifacts or oil in shallow deposits. In (very) general terms, we look at very low frequency waves, with periods of a second to a few hundred seconds, for Earth structure problems. For near-surface problems we may look at signals up to a few hundred cycles per second (Hz). Remember in the last post I said that we collect audio data at around 44,100 Hz? That's because humans can hear frequencies up to around 20,000 Hz. All of this is a lot higher frequency than we ever use in geoscience... I'm thinking that makes this technique somewhat easier to apply, and maybe even able to use poor-quality images.
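To put those rates side by side: the highest frequency a sampled signal can faithfully represent is half the sample rate (the Nyquist frequency). The seismic sample rates below are rough typical values I'm assuming, not numbers from any particular instrument:

```python
# Nyquist frequency = sample rate / 2, the highest frequency a
# sampled signal can represent without aliasing.

rates_hz = {
    "broadband seismic station": 20,   # assumed typical rate
    "active-source geophone": 1000,    # assumed typical near-surface rate
    "audio recording": 44100,          # standard CD-quality audio
}

for name, fs in rates_hz.items():
    print(f"{name}: sampled at {fs} Hz -> max recoverable {fs / 2:.0f} Hz")
```

Even the fastest geophysical sampling here sits far below audio rates, which is the point of the paragraph above.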

So what could it be used for? Below are a few bullet points of ideas. Please add to them in the comments or tear them apart. I agree with Evan that there is some great potential here.

  • Find/visualize/simulate stress and strain concentration in heterogeneous materials.
  • Extract modulus of rock from video of compression tests. Could be as simple as stepping on the rock.
  • Extend the model to add predicted failure and show expected strain right before failure.
  • Look at a sample from multiple camera views and combine for the full anisotropic properties. This smells of some modification of structure from motion techniques.
  • Characterize a complicated testing machine's stiffness/strain to correct for it when reducing experimental data, without building complex models of the machine.
  • Try prediction of building response during shaking.
  • What about perturbing bodies of water and modeling the wave-field?
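To make the modulus bullet concrete: if video tracking gives you strain and you know the applied stress, Young's modulus is just the slope of the stress-strain line in the elastic region. A sketch with synthetic data (the 50 GPa rock is invented, and real video-derived strains would be far noisier):

```python
# Fit Young's modulus E as the least-squares slope of stress vs.
# strain in the elastic (linear) region. Data here is synthetic.

def fit_slope(xs, ys):
    """Least-squares slope of the best-fit line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

strain = [0.0, 1e-4, 2e-4, 3e-4, 4e-4]       # dimensionless, from video
stress = [0.0, 5e6, 10e6, 15e6, 20e6]        # Pa, from the load frame

E = fit_slope(strain, stress)
print(f"Young's modulus ~ {E / 1e9:.1f} GPa")
```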

As with everything in science, engineering, and life, there are tradeoffs. What are the catches here? Well, the resolution is pretty good, but may not be good enough for the small differences in properties we sometimes deal with. In translating this over to work on seismic data, I think a lot of algorithm changes would have to happen that may end up making it about the same utility as our first-principles approaches. A big limitation for earthquake science is what happens at large strains. The model looks at small strains/vibrations to model linear elastic behavior. That's like stretching a spring and letting it spring back (remember Hooke's law?). Things get interesting in the non-linear part of deformation, when we permanently deform things. Imagine that you stretch a spring much further than it was designed to go. The nice linear-elastic behavior would go away and plastic deformation would start. You'd deform the spring and it wouldn't ever spring back to the way it was again. Eventually, as you keep stretching, the spring would break. The non-linear parts of deformation are really important to us in earthquake science for obvious reasons. For active seismic survey people, though, the small-strain approximation isn't bad.
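A toy version of that spring picture, with a made-up stiffness and yield point, shows exactly where the linear model stops applying:

```python
# Hooke's law (force = k * x) holds only up to a yield point; past it
# this toy spring is perfectly plastic and the force stops growing.
# Stiffness and yield values are invented for illustration.

def spring_force(x, k=100.0, x_yield=0.05):
    """Restoring force in N: linear up to x_yield, then flat (plastic)."""
    if abs(x) <= x_yield:
        return k * x
    # Beyond yield, deformation is permanent and force plateaus.
    return k * x_yield * (1.0 if x > 0 else -1.0)

for x in [0.01, 0.03, 0.05, 0.10]:
    print(f"stretch {x:.2f} m -> force {spring_force(x):.1f} N")
```

The small-strain methods in the video work live entirely in the linear branch of this function; earthquakes spend their interesting moments in the flat one.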

Another issue I can imagine is combining video from different orientations to recover the full behavior of the material. I don't know all of the details of Abe's algorithm, but I think it would have problems with anisotropic materials. Those are materials that behave differently in different directions. Imagine a cube that can be easily squeezed on two opposing faces, but not easily squeezed on the others. Some rocks behave in such a way (layered rocks in particular). That's really important since they are also common rocks for hydrocarbon operations to target! Surrounding the sample area with different views (video or seismic) and using all of that information should do the job, but it's bound to be pretty tricky.

The last thing that strikes me is processing time. I don't think I've seen any quotes of how long the processing of the video clips took to recover the audio. While I don't think it's ludicrous, I could imagine the short clips taking a few hours per 10 seconds of video (this is a guess). For large or long-duration geo experiments that could become an issue.

So what's the end story? Well, I think this is a technology that we haven't seen the last of. The techniques are only going to get better and processors faster to let us do more number crunching. I'm curious to watch this develop and try to apply it in some basic experiments and see what happens. What would you try this technique on? Leave it in the comments!

Listening to Sound with a Bag of Chips

Today I wanted to share some thoughts on some great new technology, under active development, that uses cameras to extract sound and physical property information from everyday objects. I had heard of the visual microphone work that Abe Davis and his colleagues at MIT were doing, but they've gone a step further recently. Before we jump in, I wanted to thank Evan Bianco of Agile Geoscience for pointing me in this direction and asking what implications this could have for rock mechanics. If you like the things I post, be sure to check out Agile's blog!

Abe's initial work on the "visual microphone" consisted of using high speed cameras to recover sound in a room by videoing an object like a plant or a bag of chips and analyzing the tiny movements of the object. Remember, sound is a propagating pressure wave, and it makes things vibrate, just not necessarily very much. I remember thinking "wow, that's really neat, but I don't have a high speed camera and can't afford one". Then Abe figured out a way to do this with normal cameras by utilizing the fact that most digital cameras have a rolling shutter.

Most video cameras shoot video at 24 or 30 frames (still photos) each second (frames per second, or fps). Modern motion pictures are produced at 24 fps. If we change the photos very slowly, we just see flashing pictures, but around the 16 fps mark our minds begin to merge the images into a movie. This is an effect of persistence of vision, and it's a fascinating rabbit hole to crawl down. (For example, did you know that old time film projectors flashed the same frame more than once? The rapid flashing helped us not see flicker, but still let the film run at a slower speed. Check out the engineer guy's video about this.) In the old days of film, the images were exposed all at once: every point of light in the scene simultaneously flooded through the camera and exposed the film. That's not how digital cameras work at all. Most digital cameras use a rolling shutter. This means the image acquisition system scans across the sensor row by row (much like your TV scans to show an image, called raster scanning) and records the light hitting each row in turn. This means that for things moving really fast, we get some strange recordings! The camera isn't designed to capture things that move significantly within a single frame. This is a type of non-traditional signal aliasing. Regular aliasing is why wagon wheels in westerns sometimes look like they are spinning backwards. Have a look at the short clip below that shows an aircraft propeller spinning. If that's real, I want off the plane!
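The wagon-wheel effect is easy to reproduce numerically: sample a rotation at a fixed frame rate and the apparent motion wraps around, coming out slow or even backwards. A quick sketch (the frame rate and spin rates are arbitrary):

```python
# Aliasing demo: a wheel spinning faster than half the frame rate
# appears to rotate at a wrapped-around (possibly negative) speed.

def apparent_rev_per_frame(rev_per_s, fps):
    """Apparent rotation per frame, wrapped into (-0.5, 0.5] revolutions."""
    wrapped = (rev_per_s / fps) % 1.0
    return wrapped - 1.0 if wrapped > 0.5 else wrapped

fps = 24
for rps in [2, 23, 25]:
    app = apparent_rev_per_frame(rps, fps) * fps
    print(f"{rps} rev/s filmed at {fps} fps looks like {app:+.0f} rev/s")
```

A wheel at 23 rev/s filmed at 24 fps appears to creep backwards at 1 rev/s, which is exactly the western-movie effect.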

But how does a rolling shutter help here? Well, 24 fps just isn't fast enough to recover audio. A lot of audio recordings sample the air pressure at the microphone about 44,100 times per second. Abe uses the fact that the scanning of the sensor gives him more temporal resolution than the rate at which entire frames are being captured. The details of the algorithm can get complex, but the idea is to watch an object vibrate and, by measuring the vibration, estimate the sound pressure and back out the sound.
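A rough back-of-the-envelope shows why the rows matter (the sensor geometry here is assumed for illustration, not taken from Abe's work): if every row is read at a slightly different time, the row rate rather than the frame rate bounds the temporal resolution.

```python
# Rolling-shutter arithmetic: rows are read sequentially, so the
# row (scanline) rate, not the frame rate, sets how finely in time
# the sensor samples the scene. Sensor height is an assumed value.

fps = 60            # frames per second
rows = 1080         # sensor rows per frame, assumed
row_rate = fps * rows

print(f"frame rate: {fps} Hz")
print(f"row rate:   {row_rate} Hz")
print(f"standard audio sampling: 44100 Hz")
```

At these assumed numbers the row rate comes out above the audio sampling rate, which is what makes recovering sound even remotely plausible.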

Sure, we've all seen spies reflect a laser off the glass of a building to listen to conversations inside, but this uses ordinary objects in the room and simple video! The early days of the technology required ideal lighting and loud sounds. Here's Abe screaming the words to "Mary Had a Little Lamb" at a bag of potato chips for a test... not exactly the world of James Bond.

As the team improved the algorithm, they filmed the bag in a natural setting through a glass pane and recovered music. Finally, they were able to recover music from a pair of earbuds lying on a table. The audio quality isn't perfect, but it was good enough that Shazam recognized the song!

The tricks that the group has been up to lately, though, are what's most fascinating. By videoing objects that are being minutely perturbed by external forces, they are able to model the objects' physical properties and predict their behavior. Be sure to watch the TED talk below to see it in action. If you're short on time, skip to about the 12 minute mark, but really, just watch it all.

By extracting the modulus/stiffness of objects and how they respond to forces, the models they create are pretty lifelike simulations of what would happen if you exerted a force on the real thing. The movements don't have to be big. Just tapping the table with a wire figure on it or videoing trees in a gentle breeze is all it needs. This technology could let us create stunning effects in movies and maybe even be implemented in the lab.

I've got some ideas on some low hanging fruit that could be tried and some more advanced ideas as well. I'm going to talk about that next time (sorry Evan)! I want to make sure we all have time to watch the TED video and that I can expand my list a little more. I'm thinking along the lines of laboratory sensing technology and civil engineering modeling on the cheap... What ideas do you have? Expect another post in the very near future!