Category Archives: Productivity

Fix What Bugs You (aka Lean) - Tool post wrench

A while back I was listening to Episode 101 of The Productivity Show, in which Paul Akers of FastCap was interviewed about lean. Lean is one of those buzzwords that gets thrown around without much thought, but he explained it in a simple way that the marketers would never approve of and some would argue is just common sense. Paul said lean means that you should "fix what bugs you". While many of us do that already, I've tried to keep that mantra going as I've been working on projects or just going through daily life. That coffee container I hated dealing with every morning because it was slow? Gone, replaced with a fast open/close sealed container. That one annoying error message on my laptop I'd been closing out every time I boot up for the last year? Fifteen minutes with Google and I'll never click it again. These are all little examples of realizing that something was, and had been, annoying me and stopping what I was doing to fix it. Paul talks about this idea some in a video (below), but I wanted to share a recent example of something that bugged me and that I fixed with about an hour's worth of effort.

When I'm using tools, I want to be efficient: the right tool for the right job, and the job done well. Recently I was using a lathe to make some parts and noticed that the shop had been using a large crescent wrench to adjust the tool post on the machine. The wrench was about two feet long and heavy. Not only that, its excess mechanical advantage encouraged overtightening the nut. Every time I changed the tool post, I pulled out the big wrench and carefully adjusted things. This got old very quickly.

I stopped into Tractor Supply Company and bought a 1.5" combination wrench for $15. I took the wrench back to the shop and put it on the adjustment nut. It was still a bit long and interfered with the lever on the tool post lock, so I still wasn't very happy. For $15, I decided it was time to fix this problem once and for all.

First I got out the chop saw and sawed off the open end that I didn't need. This took just a few seconds and already had the handle at a more suitable length.

Next, I grabbed the torch and heated the wrench until I could bend the handle to an angle. It took a few trips across the shop, trying the wrench on the machine, to get the angle just right.

I could have stopped there, but I wanted a nicely rounded handle end that your hand could slide off smoothly and that wouldn't scratch the back of your hand if you brushed against it. After some grinding, filing, and emery paper work, I had a really nice smooth surface. A bit of time on the wire wheel cleaned up the discoloration where I had heated the tool.

And with less than an hour's worth of time, I had a nice little tool to set the tool post. Now I'm less resistant to readjusting and can therefore make better parts faster. Lazy? Maybe, but it sure is nice to not dislike the tool, and I've made it harder to make a goof like overtightening.

Is this rocket science? No. In fact, Keith Fenner has done this as well (video below). The point is, look around and find a few things this week that bug you and take 5-60 minutes and fix them. The time and frustration saved will be worth it!

Open Science Pt. 2 - Open Data

For the second installment in our summer open science series, I'd like to talk about open data. This could very well be one of the more debated topics; I certainly know it always exposes my colleagues' opinions very quickly, in one direction or the other. I'd like to think about why we would do this, the methods and challenges of open data, and close with my personal viewpoint on the topic.

What is Open Data?

Open data simply means putting the data that supports your scientific arguments in a publicly available location for anyone to download, replicate your analysis with, or try new kinds of analysis on. This is now easier than ever, with a vast array of services that offer hosting of data, code, etc. The fact that nearly every researcher has a fast internet connection makes most arguments about file size invalid, with the exception of very large (hundreds of gigabytes) files. The quote below is a good starting place for our discussion:

Numerous scientists have pointed out the irony that right at the historical moment when we have the technologies to permit worldwide availability and distributed process of scientific data, broadening collaboration and accelerating the pace and depth of discovery… we are busy locking up that data and preventing the use of correspondingly advanced technologies on knowledge.

- John Wilbanks, VP Science, Creative Commons

Why/Why-Not Open Data?

When I say that I put my data "out there" for mass consumption, I often get strange looks from others in the field. Sometimes it is due to not being familiar with the concept, but other times it comes with the line "are you crazy?" Let's take a look at why and why not to set data free.

First, let's state the facts about why open data is good. I don't think there is much argument on these points; then we'll go on to address the more two-sided facets of the idea. It is clear that open data has the potential to increase the friendliness of a group of knowledge workers and to increase our collaboration potential. Sharing our data lets us pull from data collected by others, and gain new insights from others' analyses of and comments on our own data. This can reduce duplicated work and hopefully increase the number of checks done on a particular analysis. It also gives our supporters (taxpayers, for most of us) the best "bang for their buck": the more places the same data is used, the lower the cost per bit of knowledge extracted from it. Finally, open data prevents researchers from taking their body of knowledge "to the grave", either literally or metaphorically. Too often a grad student leaves a lab group to move on in their career, and all of their unpublished data, notes, and results go with them. Later students have to reproduce some of the work for comparison using scant clues in papers, or email the original student and ask for the data. After some rummaging, they are likely emailed a few scattered, poorly formatted spreadsheets with some random sampling of the data that is worse than no data at all. Open data means that quality data is posted and available for anyone, including future students and future versions of yourself!

Like every coin, there is another side to open data, and it is full of "challenges." Some of these even pass the polite term and are really just full-blown problems. The biggest criticism is wondering why someone would put the data that they worked very hard to collect out in the open, for free, to be used by anyone for any purpose. Maybe you plan on mining the data further yourself and are afraid that someone else will do it first. Maybe the data is very costly to collect and there is great competition to have the "best set" of data. Whatever the motivation, this complaint is not going to go away. Generally my reply to these criticisms involves data citation. Data is becoming a commodity in every field (marketing, biology, music, geology, etc.). The best way to be sure that your data is properly credited is to make it open with citation. This means that people will use your data, because they can find it, but provide proper credit. There are a number of ways to get your data assigned a digital object identifier (DOI), including services like DataCite. If anything, this protects the data collector by providing a time-stamped record that you collected data on phenomenon X at a certain time. I'm also very hopeful that future tenure committees will begin to recognize data as a useful output, not just papers. I've seen too many papers that were published as a "data dump." I believe we are technologically past that now; if we can get past "publish or perish," we can stop these publications and just let the data speak for itself.

Another common statement is "my data is too complicated/specialized to be used by anyone else, and I don't want it getting misused." I understand the sentiment behind this statement, but I often hear it as "I don't want to dedicate time to cleaning up my data; I'll never look at it after I publish this paper anyway." Taking the time to clean up data to make it publicly available gives you a second chance to find problems, make notes about procedures and observations, and make it clear exactly what happened during your experiment (physical or computational). I cannot even count the number of times I've looked back at old data and found notes to myself in the comments that helped guide me through re-analysis. Those notes saved hours of time and probably a few mistakes along the way.

Data Licensing

Like everything from software to intellectual property, open data requires a license to work. Data with no license is almost worse than no data at all, because the hands of whoever finds it are legally bound to do nothing with it. There is even a PLOS article about licensing scientific software that is a good read and largely applies to data.

The data licensing options available to you are largely a function of the country you work in and your funding agency, so you should talk with both; either may limit your options. For example, any US publicly funded research must be made available, following a presidential mandate that data be open where possible "as a public good to advance government efficiency, improve accountability, and fuel private sector innovation, scientific discovery, and economic growth." You can read all about it in the White House U.S. Open Data Action Plan. So, depending on your funding source, you may be violating policy by hoarding your data.

There is one exception to this: some data are export controlled, meaning that the government restricts what can be put out in the open for national security purposes. Generally this pertains to projects with applications in areas such as nuclear weapons, missile guidance, sonar, and other defense topics. Even in these cases, certain parts of the data can often still be released (and should be), but some bits of data or code may be confidential. Releasing those is a good way to end up in trouble with your government, so be sure to check. This generally applies to nuclear and mechanical engineering projects and some astrophysics projects.

File Formats

A large challenge to open data is the file formats we use to store it. Often the scientist collects data with an instrument that stores information in a manufacturer-specific, proprietary format. The data is analyzed with proprietary software and a screenshot of the results is included in the publication. Posting the raw data from the instrument does no good, since others would need the licensed, closed-source software to even open it. In many cases, users pay many thousands of dollars a year for a software "seat" that allows them to use the software. If they stop paying, the software stops working; they never really own it. This is a technique that instrument companies use to ensure continued revenue. I understand the strategy from a business perspective and understand that development is expensive, but this is the wrong business model for a research-oriented company, especially considering that the software is generally difficult to use and poorly written.

Why do we still deal in proprietary formats? Often it is because that is what our software produces, as mentioned above. Other times it is because legacy formats die hard. Research groups with a large base of data in an outdated format are hesitant to update it because doing so involves a lot of data maintenance. That kind of work is slow, boring, and unfunded; it's no wonder nobody wants to do it! This is partially the fault of the funding structure, but unmaintained data is useless data and does not fulfill the "open" idea. I'm not sure what the best way to change this attitude in the community is, but it must change. Recent competitions to "rescue" data from older publications are a promising start. Another, darker, reason is that some researchers want to make their data obscure. Sure, it is posted online, so they claim it is "open", but the format is poorly explained or there is no meta-data. This is a rare case, but it can be found in competitive fields. It is data hoarding in its ugliest form, under the guise of being open.

There are several open formats available for almost any kind of data, including plain text, markdown, netCDF, HDF5, and TDMS. I was at a meeting a few years ago where someone argued that all data should be archived as Excel files because "you'll always be able to open those." My jaw dropped. Excel is a closed, XML-based format that requires a closed-source program to open. Yes, Open Office can open those files, but compatibility can be sketchy. Stick to a format that can handle large files (unlike Excel), supports complex multi-dimensional data (unlike Excel), and has many tools in many languages to read and write it (unlike Excel).
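To make the plain-text option concrete, here is a minimal sketch in Python (the column names, header fields, and values are made up for illustration) of writing a small dataset as CSV with a self-describing comment header, so anyone can open it with any tool, in any language, decades from now:

```python
import csv
import io

def write_open_format(fileobj, header, columns, rows):
    """Write rows as CSV, preceded by '#' comment lines holding the meta-data."""
    for key, value in header.items():
        fileobj.write(f"# {key}: {value}\n")  # human-readable meta-data up front
    writer = csv.writer(fileobj)
    writer.writerow(columns)                  # column names live in the file itself
    writer.writerows(rows)

buf = io.StringIO()
write_open_format(
    buf,
    header={"experiment": "triaxial test 42", "units": "time s, stress MPa"},
    columns=["time", "stress"],
    rows=[[0.0, 0.0], [1.0, 12.5]],
)
print(buf.getvalue())
```

For large or multi-dimensional data the same idea carries over to netCDF or HDF5, where the meta-data rides along as attributes instead of comment lines.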

The final format/data maintenance task is a physical one: storage media change with time. We have transitioned from tapes, floppy disks, CDs, and ZIP disks to solid-state storage and large external hard drives. I'm sure some folks have data on floppy disks but haven't had a computer that can read them in years; that data is lost as well. Keeping media updated is another thankless and unfunded task. Even modern hard drives must be backed up and replaced after a finite shelf life to ensure data continuity. Until the funding agencies recognize this, the best we can do is write in a small budget line item to update our storage and maintain a safe and useful archive of our data.

Meta-Data

The last item I want to talk about in this already long article is meta-data. Meta-data, as the name implies, are data about the data. Without meta-data, most data are useless: data must be accompanied by the experimental description, relevant parameters (who, when, where, why, how, etc.), and information about what each data item means. Often this information lives in the pages of experimenters' laboratory notebooks, or on scraps of paper or whiteboards for modelers. Scanners with optical character recognition (OCR) can help solve that problem in many cases.

The other problems with meta-data are human problems. We think we'll remember something, or we don't have time to collect it. Every time I've thought I didn't have time to write good notes, I paid for it by spending much more time after the fact figuring out what happened. Collecting meta-data is something we can't ever do enough of and need to train ourselves to do. Again, it is a thankless and unfunded job... but just do it. I've even turned on a video or audio recorder and dictated what I was doing. If you are running a complex analysis procedure, flip on a screen capture program and make a video of doing it to explain it to your future self and anyone else who is interested.

Meta-data is also a tricky beast because we never know what to record. Generally: record everything you think is relevant, then record everything else. In rock mechanics we always record stress conditions, but never think to write down things like the temperature and humidity in the lab. That is, we never think to until someone proves that humidity makes a difference in the results. Then all of our old data could be mined to verify or refute that hypothesis, except we don't have the humidity meta-data. While recording everything is impossible, it is wise to record all that you can within a reasonable budget and time commitment. Consistency is key: meta-data is only useful if the same parameters are recorded every time!
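One low-effort habit that supports all of this: every time you write a data file, write a small meta-data "sidecar" next to it. Here is a sketch in Python (the experiment name and field names are hypothetical examples, not any standard):

```python
import json
from datetime import datetime, timezone

def make_sidecar(description, **parameters):
    """Bundle the who/when/why with every measured parameter into one record."""
    return {
        "description": description,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "parameters": parameters,  # record everything, even "irrelevant" items
    }

meta = make_sidecar(
    "granite triaxial test",       # hypothetical experiment
    confining_stress_MPa=50,
    temperature_C=22.5,
    humidity_percent=41,           # cheap to record now, priceless later
)
print(json.dumps(meta, indent=2))
```

Dumping this JSON alongside each data file costs a few seconds per experiment, and because the keyword arguments are free-form, adding a newly-relevant parameter (like that humidity reading) never breaks the old records.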

Final Thoughts

Whew! That is a lot of content. I think each item has a lot unsaid still, but this is where my thinking currently sits on the topic. I think my view is rather clear, but I want to know how we can make it better. How can we share in fair and useful ways? Everyone is imperfect at this, but that shouldn’t stop us from striving for the best results we can achieve! Next time we’ll briefly mention an unplanned segment on open-notebooks, then get on to open-source software. Until then, keep collecting, documenting, and sharing. Please comment with your thoughts/opinions!

How I Design a Talk

This year I'm co-chairing a session at the American Geophysical Union meeting called "Teaching and Career Challenges in Geoscience." We have been maintaining a blog for the session at keepinggeologyalive.blogspot.com. I wrote a post that I wanted to cross-post here in hopes that you too may find a few tips to help with the next presentation you need to give.

Hello everyone! While I was preparing my talk, I thought I would share my process in the hope that someone will find a useful nugget or two. There are lots of great resources out there; books like Pitch Perfect, Talk Like TED, and the MacSparky Presentations Field Guide are great places to start. With AGU only a couple of weeks away, I wanted to highlight a few ideas on presentation planning.

First, close PowerPoint or Keynote. The presentation software is not the place to start preparing a presentation. I like to sit down in a comfortable spot with a stack of index cards and a mug of coffee. While I love technology as a tool, it's just too early for it. I write one major thought at the top of each card and list supporting material below it. For a short talk, like the pop-ups, this is just a few cards, but I've had stacks over 2 cm high for longer talks. I put everything I might want to bring up on these cards; pruning the content comes later. After my cards are made, I lay them out on a big table (or the floor) and play with the ordering. I'll ad-lib sections of a fake talk and see if two thoughts can flow smoothly into each other. Once I'm happy with the general layout, I'm ready to move on.

After playing with index cards, I'll let technology in. I like using OmniOutliner here. I put my index cards into a digital outline. Lots of people start at this step, which is fine; I like starting on paper because I can sketch things out and feel less constrained. Index cards also don't have email notifications to interrupt your thinking. In OmniOutliner, I break my thoughts into short bullets. I can drag in content such as a photo of a sketch that may turn into a graphic, sound bites of an idea, or quotes I want to include.

Now it is time to decide on supporting graphics. I have an idea of what I'm going to say, so what visual aids will help tell the story? Your slides are not an outline and are not meant to guide you through the content; you and the slides together will guide the audience through your work in a logical way. Graphics can be photos, graphs of data, schematic diagrams, anything! Personally, I like to make my graphics with an assortment of applications like Python, Adobe Illustrator, or OmniGraffle. Making graphics is a whole other series of books you could dive into, including the great books by Nathan Yau: Visualize This and Data Points.

Finally, it's time to make your slides. I follow the Michael Alley approach of a slide with a (nearly) complete sentence at the top, supported by graphics. The fewer things the audience has to read, the more closely they will listen to what you have to say. If you need material to hand out, produce a small one- or two-page text document with the necessary graphics (an idea from Edward Tufte). Again, the slides should not be the presentation, but support for it. If you are stuck for ideas on slide design, head over to Garr Reynolds's blog Presentation Zen. Garr has some great examples, as well as his own books.

My last tip regards the bookends of your presentation. The beginning and the ending are incredibly important: the beginning is where you gain or lose the audience, and the end is where you make sure their time was well spent. Nail these. I don't script presentations, as that sounds too robotic, but the first and last 30 seconds are written down and well thought out.

I can't wait to hear what everyone has to share and I hope that some of these tips and resources are useful in your preparation!

The Rise of the "Expert Generalist"

[Image: Swiss Army knife (via nature.com)]

I've always appreciated the value of having a very broad range of knowledge, but recently I've observed many cases that reminded me how important it is. Growing up I worked on tractors and engines and rebuilt many mechanical devices. Later I learned how to machine metal and weld. As it turned out all of those skills and the knowledge gained have been incredibly helpful in graduate school since I happen to work with large hydraulic and mechanical systems that have all custom parts!

It turns out that, as our fields become more connected through increased interdisciplinary collaboration, we all must become "expert generalists". As geoscientists, we are always faced with writing new code, logging new types of data, or becoming GIS experts. Knowing just a little about many fields opens up entirely new ways to approach a problem. If an approach looks promising, you can then become an "expert" or consult with one, but that novel approach would likely have remained hidden without any knowledge of the other field.

The main message of the 99u article (linked at the bottom) is:

One thing that separates the great innovators from everyone else is that they seem to know a lot about a wide variety of topics. They are expert generalists. Their wide knowledge base supports their creativity.

As it turns out, there are two personality traits that are key for expert generalists: Openness to Experience and Need for Cognition.

Let's take a look at the two qualities mentioned and see how we can apply them.

Openness to Experience
Creating new content and ideas is really just a merging of concepts we already know into a complete framework or mental model for examining the problem at hand. That means we need a large body of knowledge to draw from. While this sounds like a good idea in principle, it isn't easy to do. We have to be open to meeting with people and learning about concepts that may seem completely irrelevant right now. We have to read papers outside our fields and realize that we all work on the same field, just different parts of it.

In an effort to broaden my knowledge, I've added a component to my Friday review process: the fun paper reading. Every Friday morning, while wrapping up the week and setting up the next, I find a paper outside my research area and read it. These papers have ranged from the geometry of parallel parking, to lightning science, to the fluid dynamics of sinking bubbles in a pint of Guinness.

Need for Cognition
The second characteristic is one that most of us already have: the drive to be the person always asking "why?" When you're driving down the road on a hot summer day and see the road "shimmer", do you keep going, or do you wonder what is happening? Most of us would look it up and read all about autoconvection. While some may call this going down the Wikipedia rabbit hole, it is essential to build time into our schedules for this kind of free exploration.

What can we as geoscientists take from all of this? We should always be broadening our horizons, making many connections with people in all areas, and not forget that we are all working on the same problem... understanding our world.

99u: Picasso, Kepler, and the Benefits of Being an Expert Generalist

NSF Graduate Fellowships - Some Thoughts and Tips

While this post may not appeal to a general audience, I thought it would be useful because it is an important topic for any senior undergraduate or first- or second-year graduate student. Today I want to briefly tell you about my experience applying for the NSF Graduate Fellowship in 2012 and 2013.  I learned a lot in the process of applying for this prestigious fellowship and hope that I can pass some of that knowledge along!

Application 2012 - No award

My first year at Penn State, I applied with the traditional three documents of research statement, personal statement, and research proposal.  I sought the edits of those who had been awarded the fellowship in the past and thought I had a convincing packet assembled.  After reading, re-reading, and re-reading, it was time to submit.  I submitted the application, then made the mistake of reading over it again a week later and finding things I wished I had changed.  Months went by and seemed to drag on until the award announcements came.  I was not selected for an award.  While I was of course disappointed, it was time to kick it into high gear and make an even better application for my next (and final) try.

Application 2013 - Award Offered

For my second application I had lots of debates with myself.  Should I change my research proposal topic? Were my personal and research statements too similar? How can I improve the writing? Should I include figures?

To settle these debates, I turned to the wealth of online information that I hadn't sought out the previous year.  I talked with those who had received the award, I read funded research proposals from various professors and researchers, and I went down to the bare bones of the document.  While I'll discuss specific tips below, I'll just say that I started earlier, took more pauses between writing sprints, and sought more people for reading.

My tips

In writing two proposals, I learned a lot about how to effectively structure my research narrative and emphasize the specific angle of attack I'd take on a research question, and why it's different.  Here are some things I found helpful:

MY PROCESS

  • Start Early - Think it's too soon? Wrong! You need lots of time to organize your thoughts, revise, rewrite, and think about your application.
  • Read the Announcement - Print out the announcement document and read it critically.  You can look at the 2014 announcement here.  Don't just read it, mark on it. Highlight what they specifically are looking for, underline the buzzwords and key phrases of the call.  Also, draw a big box around the application deadline and then plan to beat it by one week.  Why? Computer problems, server crashes, unexpected medical emergency, etc.  You don't know what could happen, so make sure that your application gets in early!
  • Make a FastLane Account - Go to the online application and make an account.  Get familiar with FastLane, you'll use it for most all of your NSF proposals unless they change sometime in the future! Look at the application.  Go ahead and fill in the boxes with your name, address, etc.  Now you can mark down progress on your application and have momentum to move forward with the hard work.
  • Write the Requirements for your statements out on Paper - This one is huge.   In the application, pull up the research proposal and background/personal statement "prompts."  Print them, read them many times, and finally write them down on a notepad.  Break the prompt up into small chunks and then think about how to answer each piece.  Don't worry about flow, just think.
  • Brain Dump - Now write each one of the pieces of the question on the top of a page and begin to outline the points that you will make to address it.  Again, don't worry about order or how many points you have! Just write and write and write.
  • Organize into an Outline - Take a break, a day or so, then come back to your brain dump afresh and think about how you can piece it together into one coherent story - your story.  The story of a proposed research project and the story of you and your life in science.
  • Make a Draft - It does not need to be pretty, organized, the right length, etc.  Just get complete sentences onto the page.  Do this on paper or in a plain text editor.  Don't worry about formatting, length, spacing, margins... Those are things for later in a word processor.  I like using Textastic, Sublime, TextWrangler, or Editorial.
  • Read it and Have Lots of People Read it - Don't be afraid to ask everyone to read and edit your document.  Do not ask them to re-write the document for you! Remember, this needs to come from your brain, but it is fine to gather suggestions and comments.  I also went to the graduate writing center and got some great suggestions from the coach there.  As scientists, we are not used to marketing ourselves, and we often think the need for our research is obvious... That won't work.
  • Talk to Your Reference Writers - You'll need letters of reference.  These take lots of time to write, so make things easy on your mentors and writers.  They have done a lot for you and are about to help out again.  I went through the application, figured out what I thought would be important to my application reviewers and then composed an email to my writers (see below).
  • Do NOT Cram - Whitespace is a dear friend to someone who is reading many pages of documents... like your judges.  Don't pack every single word you possibly can into the pages.  Economy of words shows great thought and restraint when writing.  Edit down over and over.  Leave white space between paragraphs.  If you use figures, text wrapping is a fine way to reclaim space, but leave a sufficient margin.  Look at books and other professionally formatted documents for inspiration.

MY APPLICATIONS FROM BOTH YEARS

Here are links to documents I produced for both applications, of course don't plagiarize, but hopefully they are helpful!

NSF 2012 (No Award)
Personal Statement | Previous Research | Proposed Research

NSF 2013 (Award Offered)
Research Statement | Personal & Background Statement |
Letter to Reference Writers

LINKS

There are several other helpful webpages out there about the application process.  Remember, what you read on the official program site is the final word, but these pages have more useful tips.

GRFP Essay Insights (Missouri)
Alex Lang's Website
Jennifer Wang's Website
Reid Berdanier's Website
The Official NSF GRFP Page
NSF PAPP Guide Book

Remember, if you don't get the award, take the feedback you get and start improving! Try, try again and don't be afraid to seek help from mentors, writers, friends, and family.  Please leave any useful comments below. Best of luck!

"The 4-Hour Work Week" - Thoughts on the Best Selling Book

In his best-selling book "The 4-Hour Work Week", author Tim Ferriss argues for working less and experiencing life more.  While this concept is intriguing and is what many corporate types long for, I was curious if and how Mr. Ferriss' ideas could carry over to academia.

In academics, many of us (myself included) enjoy what we do and spend much, much more than 40 hours a week "working".  While we may not always be in the lab or the office, we generally think about problems all day and all night.  Why do we do this? Is it because we are addicted to work? Not really; we do it because of an intense interest in, and desire to solve, problems.  You may think this automatically throws the concepts of the book out the door, but it doesn't! Tim Ferriss admits that he really does work more than 4 hours a week; his main argument is about cutting unnecessary things out of your schedule.

The concept of "work for work's sake" is hammered on throughout the book as something to eliminate immediately.  Do you have items that you do to avoid tackling the big projects? For example, sending out a dozen "check up" emails that don't really need to be sent, or that could be handled in a short face-to-face, kills an hour or so and helps you avoid the real items that are important but difficult.  This concept carries right over to academics, and graduate students are famous for ninja-like procrastination skills.

To achieve the freedom described in the book, Ferriss outlines a plan for starting a business, automating it, and letting it produce income with minimal input from you... Sound too good to be true? It probably is, and it certainly doesn't sound like academia, but with some adapting I think we can focus it down and consider a few of the sub-points in detail.

Starting a business to market a product: not so applicable to graduate students; we produce ideas and concepts that can't be outsourced or sold. If your idea later becomes a product, instrument, etc.? Maybe. Overall I'd say this part isn't very applicable, and you'll never have to worry about distribution houses and merchant accounts.

Automation: YES! This is something that many graduate students, professors, and even undergraduates can use more of.  My motto is "if you have to do it more than 10 times, write a program".  In some cases that is overkill, but a few simple things like automating your email rules, writing a data plotting routine instead of pointing and clicking to make the same plots, or even automating bill pay can save you enormous amounts of time.
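In the spirit of the "write a program" motto, here's a minimal sketch of what replacing a point-and-click plotting chore might look like. The column names and sample data are made up for illustration; the script parses two named columns from a CSV and, if matplotlib happens to be installed, saves the figure in one shot.

```python
import csv
import io


def load_xy(csv_text, x_col, y_col):
    """Pull two named columns out of CSV text as lists of floats."""
    reader = csv.DictReader(io.StringIO(csv_text))
    xs, ys = [], []
    for row in reader:
        xs.append(float(row[x_col]))
        ys.append(float(row[y_col]))
    return xs, ys


def save_plot(xs, ys, out_path):
    """Save a simple x-y plot to a file (only if matplotlib is available)."""
    import matplotlib
    matplotlib.use("Agg")  # render to file; no display needed
    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    ax.plot(xs, ys, "o-")
    fig.savefig(out_path)


if __name__ == "__main__":
    # Hypothetical data standing in for whatever your instrument spits out.
    data = "time,temp\n0,20.1\n1,20.8\n2,21.5\n"
    xs, ys = load_xy(data, "time", "temp")
    print(xs, ys)
```

Once something like this exists, regenerating the same plot for a new data file is one command instead of ten minutes of clicking.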

First off, email: my personal plan follows the David Allen "Getting Things Done" philosophy of inbox zero. At the end of every day, when I leave for home, my inboxes (electronic and physical) have exactly zero items in them. If an item takes less than 2 minutes I do it immediately; if it takes more, it gets clipped into OmniFocus and the email archived or deleted. You can also improve your spam filters and use smart labeling/smart inboxes depending on your email setup. All my inboxes (about half a dozen addresses) forward to a Gmail account because I think Google is the most versatile provider. I use Sparrow as a client, but the web interface is fine as well. Check out the wealth of information available, including Gmail Ninja.

Ferriss encourages readers to check email only twice a day and then cut down from there; this simply isn't possible in my field. Ferriss even recommends paying personal assistant services to do things like check email and make basic appointments/decisions. Assistants aren't really useful for more than business-type emails, though, and as a graduate student I live in a world of mostly non-FAQ-style messages. Could I check email less than I do currently? You bet, and I'm working on that. Checking email first thing in the morning, while discouraged by many experts, is still essential for me. I want to know what's flying my way for the day that I need to add to the schedule. Playing with timers that shut down my email for 30-90 minute focus sessions is an experiment I plan to try in the coming year. (Ferriss would call this a "lifestyle experiment", which is an excellent term in the eyes of an experimentalist.) Use a stats service to track your email response time, message length, thread length, etc. If you're sending an email every 20 minutes in a long thread of long messages, just pick up the phone and call or Skype the parties involved. I know, I hate phone calls as well, but it saves massive amounts of time.

Further automation (some from Ferriss and some I've picked up) includes Amazon auto-shipment of essential items and automatic bill pay. The automatic shipment of Amazon products on a schedule saves me not only money, but time. I no longer run to the store at 8:30 pm several times a year to pick up essentials I've forgotten. Things like deodorant, toilet paper, toothpaste, printer paper, ink, toner, juice, snacks, and just about anything non-perishable can be scheduled to ship at a given interval. I see the email that a shipment is on the way and leave a note for the UPS delivery person to drop the package at my door instead of the office. It sounds silly, but it really does save me trips and thinking about repetitive tasks. Options to request an additional shipment or skip a shipment exist if you over- or under-estimated how often you use a product. Automatic bill pay is a similar process: the computer does it and I just supervise by checking my online banking site every week and noting account action emails. (If you happen to be in the area, PNC Bank has great online tools.)

Meetings: Ferriss advises us to avoid these at all costs. While that may be a good bit of advice, spending time with fellow scientists is how new projects emerge, so cutting meetings isn't as easy as in business. I do like the idea of having whoever requests a meeting (be it a student or a colleague) send some discussion points to the group ahead of time. That way everyone knows what's on the table and conversation is less likely to drift. Hit the points, write down new points, but try not to chase them. Save the idea batting for my favorite meeting format: agree to meet later for a drink and bring notebooks. Geologists are famous for their love of beer, and having a beer provides a relaxed environment that lets ideas flow onto paper without eating into the "business hours" spent knocking off action items.

While there are more points in the book, I want to mention just one more: mini-retirements. Ferriss asks: why retire? Just take breaks from the work and spread experiences throughout your lifetime. This is a great idea and very easy to do in academics. We get mini breaks with semester schedules, but it's also not uncommon to go on a research trip or to an overseas conference and stay an extra week or two. Several people in our department end up living overseas for months at a time to do their work and gain new experiences. This concept is on my list of experiments to try, but I've noticed that working in intense sessions and then really taking small breaks during the day/week has already helped.

In the end, I recommend you read the book. Maybe grab it from your library and see what you think. Ferriss comes across as a rather abrasive personality, but you don't have to be one to adopt a few of the concepts in the book. It really boils down to the same idea: make a set of goals, get to them in the best way possible, and don't waste time, whether in recreation or work.

The Scientific Workspace

Today I'd like to discuss the evolution of the scientific workspace, but before that I need to address a few comments and recent happenings. The fluxgate magnetometer project is done; I decided not to build a bandpass filter into the unit. Hopefully I can get the schematic drawn up nicely and post a PDF in my website's content section. Website, oh yes: there is a new website for my academic life. I'll still be doing blog posts here, but the website will have all my static content, research, etc.



Awhile back I read an interview with Adam Savage of the popular Discovery show MythBusters. The interview was mostly about how Adam works and the productivity tools he utilizes. The question and answer that caught my attention was the following:


Q: What's your workspace setup like?

A: I have several desks: One at home, one at work, and one in my own shop. I spend little time at any of them. My workplace is wherever I'm making something, which could be in a field in gold country, or in an abandoned warehouse on a military base.

The line "My workplace is wherever I'm making something" is what I want to discuss. Scientists are often viewed as working hard in their lab with test tubes, beakers, and Bunsen burners (as evidenced by a colleague asking his geoscience intro class to draw a scientist on their first day of class). This view is really valid for only a small sector of the sciences; as geologists, we are often making a workspace in the field on an outcrop of rock, working on a laptop in the office or at a coffee shop, or doing an experiment in the lab. So what is the workspace and how has it changed?




First: do people (not just scientists or geologists) view the workspace differently than they did in the 1960's? I think so. With the advent of mobile computing and the ability to walk around with 1000+ PDF files and books on an iPad, the office is becoming less and less important. Until the late 90's the office was the place where all your paper lived; without this support it was impossible to do much work. Now that this isn't the case, I believe the office is being occupied more infrequently and replaced with the mobile office. The internet is also making telecommuting easier each year. While in Houston I could occasionally see updates to spacecraft flight software coming into the repository from a colleague who frequently programmed at a Starbucks. Just a few years ago that was impossible, and during the Apollo days it was out of the question.


Next, can the creative (yes, scientists are creatives who won't admit it) work in a single workspace like an office or lab? While they could, this is a severely limiting strategy. Several times I've found it useful to go into the shop or lab, tinker with things, and set up a laptop and work there. Sometimes I spend the majority of the week at the desk, but sometimes I'll set up for a paper-reading or programming marathon in another building or at a restaurant.


Why would you want to work somewhere that doesn't have the big monitor and files you enjoy at your desk? Chance encounters. While working in the traditional office should still be a component of our days, some of the most useful conversations I've had have occurred with people in other buildings on campus or at a coffee shop.

For example: in December of last year I was working in a tea shop near Denver, programming an image analysis code (for the laser cave mapper). While I was coding away, the owner of the shop (Damon) came over to refill my glass and noticed I was writing software on a Mac. He inquired about what I did, asked if I could answer a Mac question for him, and then, from the view of an outsider to the geosciences, made a comment that ended up making me think a lot about other applications for this technology. These kinds of chance encounters have happened several times and have even led to some good professional relationships.


I believe the physics rock star Richard Feynman would have loved this notion of many workspaces. Feynman loved new ways to look at things and could be examining a complex problem from a new angle while in the outdoors, at a blackboard, or submerged in a tub of water on hallucinogenic drugs (to see Feynman's unique mind, I highly suggest his book Surely You're Joking, Mr. Feynman!).


I suppose the biggest point I want to make with these examples, and with the quote above, is that as scientists it's easy to get comfy in our offices, surrounded by a couple of giant computer screens and full of distractions. We shouldn't throw the office out, but we should be sure to go into the lab (even if you're not an experimentalist) and tinker, go into the field and observe connections, or go to a coffee shop and make that a temporary office. Anywhere can be your workspace, and it's enriching to switch between locations and look at the same problem with another set of tools and surroundings.



Rule #32 - Enjoy the Little Things (and why it's scientifically sound)

I'm going to deviate from my normal format of the physical sciences and take a short look into the human brain. Anyone who has seen the recent comedy 'Zombieland' probably remembers the rules that the main character (Columbus) had. In fact, Columbus had many rules, including cardio, double tap, beware of bathrooms, don't be a hero, etc., that he used to survive Zombieland.

Today I came across an article about happiness, and it made me recall the rule Columbus learns from a redneck type called Tallahassee. The rule (#32 in his book) is 'Enjoy the Little Things'.

The Psychology Today article (found here) examines the negative bias of our brains. Scientists have shown time and again that negative events have more influence on us than positive events. While this is likely a remnant of what helped us survive early on, it can sometimes get in the way of happiness today. Just think about the last time one bad event ruined a whole day, week, month, or, in extreme cases, a year.

A group of researchers even examined married couples and found that those who were long-term and happy had a ratio of happy to bad events of about 5:1. Couples with a lower ratio, especially a much lower one, were more likely to separate in the near future. For years we've heard that marriage isn't 50:50, but who would have guessed it's approximately 83.3:16.7?
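That percentage conversion is just arithmetic on the 5:1 ratio (5/6 and 1/6 of the total); a tiny sketch makes it explicit:

```python
def ratio_to_percent(good, bad):
    """Convert a good:bad event ratio into percentages of the total."""
    total = good + bad
    return round(100 * good / total, 1), round(100 * bad / total, 1)


print(ratio_to_percent(5, 1))  # prints (83.3, 16.7)
```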

From studies like this, scientists have concluded that we are better off enjoying many small happy events than just a few great big happy ones. While doing something big like buying a new car may make you happy, it can be negated by just a few simple bad events. If you have many small happy events in life, you will likely be a happier person. To sum it up: enjoy the little things!