Detecting Near-Earth Asteroids

The summer after I graduated from college, I began working at JPL (NASA's Jet Propulsion Laboratory, managed by Caltech). About two weeks before graduation, a professor of mine emailed and asked if I'd be interested in working for JPL through him for the summer, essentially setting up an observation program to take data that would be analyzed to find nearly impossible-to-find objects.

These impossible-to-find objects are generally asteroids. Asteroids have no light of their own (aka not a star), they're generally fairly small (aka not a planet), and they're moving fairly quickly across the sky. Because of these three things, asteroids are some of the hardest-to-detect objects in the solar system. Usually when dealing with extremely faint objects, the answer is simply a longer exposure (basically, you open up the camera shutter and let light pour in for a longer period of time). You can also take a hundred exposures and then stack them all into one image; this makes faint things appear brighter because the light in each pixel is added together across the exposures. Great, problem solved, right? No. Not at all. A longer exposure only works when the object is more or less not moving. That's fine for extremely distant objects, which show very little apparent motion because they're so far from Earth. Asteroids, because they're so close, appear to move like crazy.
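
To make the stacking idea concrete, here's a quick toy example (NumPy on fake data, not anything from JPL) of why co-adding helps: the signal grows linearly with the number of frames while the random noise only grows as its square root.

```python
import numpy as np

# Toy illustration (not JPL's pipeline): co-adding many short exposures.
# "frames" stands in for a stack of calibrated images of the same field;
# in practice these would be read from FITS files.
rng = np.random.default_rng(0)
frames = rng.normal(loc=100.0, scale=10.0, size=(100, 128, 128))  # 100 noisy exposures
frames[:, 64, 64] += 3.0  # a source far too faint to see in any single frame

stacked = frames.sum(axis=0)  # add the frames pixel by pixel

# Signal grows linearly with the number of frames N, but random noise only
# grows as sqrt(N), so stacking 100 frames buys roughly a 10x boost in SNR.
snr_single = 3.0 / 10.0
snr_stacked = (100 * 3.0) / (np.sqrt(100) * 10.0)
print(f"single-frame SNR ~ {snr_single:.1f}, stacked SNR ~ {snr_stacked:.1f}")
```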

Okay, we can solve that problem too. How do you take a long exposure of, say, a car driving down a road at 50 mph? You follow the car with the camera while it takes the exposure. Everything in the background comes out blurry, but the car looks great (see picture). So we can just do the same thing with asteroids, right? Wrong again. We don't know where the asteroids are, and we don't know which direction they're moving.

This is where the JPL scientists came up with a great idea. Why not take hundreds of short exposures of a single field, then shift each exposure by a small amount so the frames line up perfectly with the motion of the asteroid? Everything in the background will obviously come out blurry, but we don't care about that (check out my awesome Paint skills below; hopefully the picture helps). Now, we still don't know the correct way to shift the images, but we can write a program that does all of that work for us: if the computer tries thousands of different shift combinations and stacks the frames for each one, perhaps one combination will make an extremely faint object pop out.
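
The paper linked at the bottom describes the real method in detail; below is a stripped-down, hypothetical sketch of the shift-and-stack idea in Python, run on made-up synthetic data with a brute-force grid of trial velocities:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def shift_and_stack(frames, dt, vx, vy):
    """Undo a trial motion (vx, vy, in pixels/sec) before summing the frames.

    If the trial matches a moving object's true motion, its light piles up
    in one spot; otherwise it smears out just like the background stars.
    """
    stacked = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        stacked += subpixel_shift(frame, (-vy * i * dt, -vx * i * dt), order=1)
    return stacked

# Fake data: 50 noisy frames with a faint "asteroid" drifting at 0.2 px/s
# in x. The source is well below the noise in any single frame.
rng = np.random.default_rng(1)
dt, n_frames = 1.0, 50
frames = rng.normal(0.0, 1.0, size=(n_frames, 64, 64))
for i in range(n_frames):
    frames[i, 32, int(round(10 + 0.2 * i * dt))] += 1.5

# Brute-force search over trial velocities: the trial whose stack has the
# brightest peak is the one that best matches the object's real motion.
trials = [(vx, 0.0) for vx in np.linspace(-0.5, 0.5, 21)]
best = max(trials, key=lambda v: shift_and_stack(frames, dt, *v).max())
print("best trial velocity (px/s):", best)
```

In the real pipeline the trial grid covers both axes at a much finer resolution, which is exactly why throwing together thousands of combinations takes serious computing power.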

Anyway, this is where I get to step up to bat. Obviously they need data to actually run their program on, so I was asked to set up the observation process and make sure everything was up and running, so that once I leave at the end of the summer, other students can pick up right where I left off and keep taking data. I'm simply in charge of setting it all up and making it as easy as possible (because we all know freshmen aren't the smartest of people).

My job so far has been to write a Python script that creates a grid of coordinates: the observer gives the script a location, an exposure length, a number of exposures, a grid pattern, and a grid spacing, and the script spits out a file that simply has to be run in the ipython shell to begin observing and taking data (a toy sketch of the idea is below). Beyond that, I've been debugging some issues with the camera, which has been annoyingly problematic, and looking for ways to fix the guiding so the telescope stays locked on one field. That's more or less what I've been up to so far. I'll fill this out more once I get to observing, and maybe add some of the observations. For now, I leave you with a copy of my code for creating the script and a paper JPL published in March of 2014 that goes into far more detail on this process.
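
Since the actual script is JPL's property (it's linked below), here's a hypothetical, stripped-down sketch of what such a grid generator could look like. The parameter names and the slew()/expose() calls are inventions of mine for illustration, not the real interface:

```python
import numpy as np

def make_grid_script(ra0, dec0, n_exposures, exp_len, grid_shape, spacing,
                     outfile="observe_run.py"):
    """Write a toy observing script: a grid of pointings around (ra0, dec0).

    ra0, dec0   -- field center, decimal degrees
    n_exposures -- exposures to take at each pointing
    exp_len     -- exposure length, seconds
    grid_shape  -- (rows, cols) of the pointing grid
    spacing     -- on-sky grid spacing, degrees

    slew() and expose() are placeholder calls, not a real telescope interface.
    """
    rows, cols = grid_shape
    # RA offsets are stretched by 1/cos(dec) so the on-sky spacing stays
    # uniform away from the celestial equator.
    dra = spacing / np.cos(np.radians(dec0))
    ras = ra0 + (np.arange(cols) - (cols - 1) / 2) * dra
    decs = dec0 + (np.arange(rows) - (rows - 1) / 2) * spacing

    with open(outfile, "w") as f:
        f.write("# auto-generated observing script -- run it in the ipython shell\n")
        for dec in decs:
            for ra in ras:
                f.write(f"slew(ra={ra:.6f}, dec={dec:.6f})\n")
                f.write(f"expose(n={n_exposures}, seconds={exp_len})\n")

# Example: a 3x3 grid, 0.2 deg apart, 100 five-second exposures per pointing.
make_grid_script(150.0, 2.5, n_exposures=100, exp_len=5.0,
                 grid_shape=(3, 3), spacing=0.2)
```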

My code is here. This is technically JPL's property, so please don't use it without their consent.

The paper can be found here.