Astrophotography is all about capturing photons. In recent years there has been a huge growth in the number of astrophotographers, mainly due to the availability of equipment and vastly improved sensors. This has greatly expanded the field of wide-field astrophotography, producing stunning panoramas of the Milky Way and some very innovative foreground effects.
There has also been a huge growth in the number of “experts” offering courses and advice on how to do astrophotography. We see this all the time from where we run dark-sky observing in Wairarapa, and it is what motivated us to set up our own course rather than rely on the somewhat marginal quality of information and advice offered by a few others. Both of us are very experienced astrophotographers. Hari has years of experience in wide-field work, has developed her skills through the technological changes that have come along, and she published NZ’s first astrophotography magazine, Milky-Way.Kiwi, back in 2009. I have done the same in deep-sky and planetary astrophotography. Both of us have a technical and scientific background, so we have the theory behind the practical, which enables us to get some great results. We use fairly modest equipment, as we simply don’t have the budget for high-end gear, but we also want to stay connected with the vast range of photographic devices that the general public point up at the night sky.
The growth of awareness about dark skies has been hugely positive for astrophotography which, in turn, feeds back into increasing awareness of the importance of preserving dark skies. The purpose of this article is to get into some of the fundamentals of astrophotography, the stuff that you won’t get in the community Astronomy 101 type courses. Astrophotography is, fundamentally, pretty simple. The aim is to get faint light onto a sensor, record it and use it. Depending on how it is captured, it can be used for aesthetic purposes or for science, or sometimes both. To discuss this topic we have to do a bit of science and a little bit of maths, but hopefully it’s not too tricky. The sensors we use for astrophotography detect energy, and that energy comes from the light the sensor is pointing at. There might be a telescope or a lens between the sensor and the sky, but the sensor doesn’t know that; it’s just after the energy.
A bit of light science
Photons are fundamental particles of light; they travel at the speed of light and carry energy that relates to the wavelength of the light. That statement might sound a bit confusing: how can a particle have a wavelength? This is the quantum weirdness of light, in that it can behave both like a particle and a wave. We won’t worry so much about the wave aspects in this article as we’ll be talking a lot about photons. A camera has a sensor arranged in a grid; each little square in the grid is called a pixel, and each pixel is its own little sensor that measures the energy from the photons that hit it. The measurement ends up as a number, and those numbers, one from every pixel, together make up the picture. Astrophotography is all about capturing enough photons on each pixel to get a useable number. It sounds simple but it does take a bit of work. So how do you get photons onto the pixel?
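The idea that a picture is just a grid of numbers can be sketched in a few lines of Python (the values below are invented purely for illustration):

```python
import numpy as np

# A toy 4 x 4 "sensor": each entry is the number a pixel reported,
# which in turn reflects the photon energy that pixel received.
frame = np.array([
    [2,  3,  2,  1],
    [3, 40, 35,  2],
    [2, 38, 42,  3],
    [1,  2,  3,  2],
])

# The bright patch in the middle plays the role of a star; the low
# values around it are the sky background. A display program simply
# maps bigger numbers to brighter screen pixels.
print(frame.max())   # brightest pixel: 42
```
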
The Sun is the brightest source of photons that we have relatively handy to us on Earth; even when it’s cloudy it’s still bathing us in loads of photons. The Sun is enormous, and the energy that hits the Earth works out to be around 1400 Watts for every square metre (measured above the atmosphere, which absorbs some of it on the way down). Now time for a bit of maths: a Watt is a unit of power, which is basically an amount of energy over a period of time. The unit of energy is the Joule (one Joule is the energy it takes to push with a force of one Newton over a distance of one metre, something you could manage with your little finger, so one Joule is not much). The period of time is in seconds, so one Watt is the same as saying one Joule per second. For the 1400 Watts hitting each square metre of the Earth we could say that there are 1400 Joules of energy arriving on each square metre every second.
How bright is our star, the Sun?
So how does that energy get transferred from the Sun to the Earth’s surface? That’s where photons come into it: they are the energy carriers (there are other particles as well, but we’re just dealing with the photons). Each photon carries a little bit of energy, measured in Joules. It’s a tiny amount that works out to be about 0.000000000000000000374 Joules per photon (that’s 18 zeros after the decimal place). Since we know that number we can work out how many photons from the Sun are hitting each square metre of the Earth. That’s simply dividing the 1400 Watts by the energy per photon, and we end up with the massive number of 3,743,700,000,000,000,000,000 photons hitting each square metre every second. The sensor in a camera is really tiny and the pixels in that sensor are even smaller. In one of my cameras, each pixel is only 2.4 microns square; a micron is one millionth of a metre. So if I put my camera in direct sunlight (which I wouldn’t do!) there would be 21,560,000,000 photons hitting each pixel every second (and that’s why you get sunburnt!). The Sun is the nearest star so it’s going to be incredibly bright. For astrophotography, we are generally going to be photographing stars light years away.
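The arithmetic above is easy to check for yourself. A quick Python sketch, using the rounded figures from the text (so the outputs are approximate):

```python
# Rounded figures from the text
solar_flux = 1400.0        # Watts (Joules per second) per square metre
photon_energy = 3.74e-19   # Joules per photon, i.e. 0.000000000000000000374
pixel_side = 2.4e-6        # metres; the pixel is 2.4 microns square

photons_per_m2 = solar_flux / photon_energy         # photons per m^2 per second
photons_per_pixel = photons_per_m2 * pixel_side**2  # photons per pixel per second

print(f"{photons_per_m2:.3g}")     # ~3.74e+21 per square metre per second
print(f"{photons_per_pixel:.3g}")  # ~2.16e+10 (about 21.6 billion) per pixel
```
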
How bright is the next brightest star?
Sirius is the brightest star in the night sky and, at 8.58 light years away, it is reasonably close to us. It is about 22 times more luminous than the Sun, but being so much further away not much of that light makes it to Earth. If you recall that the Sun delivers about 1400 Watts per square metre, Sirius delivers about 0.000000104 Watts per square metre at the Earth’s surface. We can work out how many photons that lands on one pixel of my camera’s sensor, and it’s just two photons per second, more than ten billion times fewer than what the Sun delivers! Of course, to get a picture of Sirius we need to focus the image, so we put a lens between the sensor and the sky. That lens might be a telescope; I normally use a 12″ reflector, which effectively boosts the number of photons reaching each pixel on the sensor by about 1300 times, giving about 2500 photons per pixel every second for Sirius. That’s quite a strong signal: even if the exposure was only 100 milliseconds there’d still be a very readable signal of 250 photons per pixel. The software that I use to control my camera also allows for binning, where adjacent pixels are combined, usually in a 2 x 2 configuration, which essentially turns four pixels into one. This boosts the count of photons from Sirius hitting each (now larger) pixel to around 10,000 per second.
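The same arithmetic works for Sirius. The sketch below reuses the rounded figures from the text; note that the text rounds the bare-pixel rate up to about two photons per second, which is how it arrives at the 2500 and 10,000 figures, so this sketch lands a little lower:

```python
# Rounded figures from the text
sirius_flux = 1.04e-7      # Watts per square metre from Sirius at Earth
photon_energy = 3.74e-19   # Joules per photon (same rough value as for the Sun)
pixel_side = 2.4e-6        # metres
scope_boost = 1300         # effective gain of the 12" reflector, per the text

bare_pixel = sirius_flux / photon_energy * pixel_side**2  # ~1.6 photons/s
with_scope = bare_pixel * scope_boost                     # ~2100 photons/s
binned = with_scope * 4                                   # 2 x 2 binning: ~8300/s

print(round(bare_pixel, 1), round(with_scope), round(binned))
```
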
How to capture a dim star hiding near a very bright star
This is great for a bright star, but it’s a different story for the bulk of the stars we take photos of, which are considerably less bright than Sirius (which we’ll call Sirius A from here). For example, the dim companion of Sirius A is very tricky to photograph because of its very bright neighbour. Sirius B is a white dwarf, the remains of a once much larger star and the companion to Sirius A. What remains is a ball about the size of the Earth with the mass of the Sun squashed into it. Its surface temperature is about 25,000 K, so it pumps out quite a bit of light, but hardly anything compared to the still-shining Sirius A. Sirius B is about 10,000 times less bright than Sirius A, which means there is about 1 photon hitting a pixel of my camera every four seconds from Sirius B. They don’t arrive on schedule every four seconds either; just to make things complicated, that’s only an average. So to photograph Sirius B you have to ensure the sensor is not overwhelmed by Sirius A, which means using the shortest possible exposure that will still capture Sirius B.
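That “only an average” point is worth seeing in numbers. Photon arrivals follow Poisson statistics, so counting how many Sirius B photons land in each four-second window (at the one-per-four-seconds rate above) gives a very uneven series. A small simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(1)   # fixed seed so the run is repeatable
rate = 0.25                      # photons per second from Sirius B (from the text)
window = 4.0                     # seconds, so the mean count per window is 1

# Photon counts in 12 consecutive four-second windows: some windows
# catch nothing at all, others catch two or three photons at once.
counts = rng.poisson(rate * window, size=12)
print(counts)

# Over a long run the average settles back to about one per window.
long_run = rng.poisson(rate * window, size=100_000)
print(round(long_run.mean(), 2))   # close to 1.0
```
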
You might be thinking that a four-second exposure should get at least one photon per exposure, which on average it will. The problem is my camera can’t count every individual photon; it only “sees” about half of them because of the sensitivity (quantum efficiency) of the chip. To boost this we can bin the pixels 2 x 2, squashing four pixels into one. That means about 1 photon per second should arrive at each binned pixel from Sirius B, and about half of those will create an electrical response that can be measured, which is enough to produce a useable signal. A one-second exposure is still too long for Sirius A, as it is so bright, so we drop the exposure down to half a second. That means we will get about 1 photon per binned pixel in every other exposure, on average. This is not much, so we have to figure out a way of getting more photons onto the pixel. We do this by taking lots and lots of exposures: we know that about half of them won’t catch any photons and that, of the other half, only about half will register a signal. So if we take 1000 exposures, about 25% of them will have a measurable signal. If we stack those exposures and take the average, the noise is reduced and the signal from Sirius B is boosted a little.
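A small simulation makes the stacking argument concrete. The detection chain is modelled as two coin flips per half-second frame (a photon arrives about half the time, and a photon that arrives is detected about half the time), with a made-up read-noise level thrown in; the per-frame numbers are assumptions for illustration, not measurements from my camera:

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames = 1000

# Per half-second frame: a photon arrives about half the time, and the
# chip registers about half of the photons that arrive, so the mean
# detected signal is 0.25 electrons per frame.
arrived = rng.random(n_frames) < 0.5
detected = arrived & (rng.random(n_frames) < 0.5)
signal = detected.astype(float)

# Hypothetical read noise of 0.8 electrons rms added to every frame.
frames = signal + rng.normal(0.0, 0.8, n_frames)

stacked = frames.mean()                # average of 1000 frames
noise_after = 0.8 / np.sqrt(n_frames)  # noise shrinks by the square root of N

print(round(stacked, 2))       # close to the true 0.25 signal
print(round(noise_after, 3))   # ~0.025, so the signal now sits well above the noise
```

In a single frame the 0.25-electron signal is buried under 0.8 electrons of noise; after averaging 1000 frames the noise floor has dropped by a factor of about 32, and the faint signal stands out.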
This is exactly what I did to capture the image of Sirius B above. I took about 1000 pictures with my camera set at an exposure of 0.5 seconds through a 12″ telescope, with the pixels binned 2 x 2, and stacked the result. The processing software rejects any slightly blurry frames and only stacks the sharp ones. When stacking this picture it was important to combine the frames using an average, rather than other methods such as sigma clipping, because sigma clipping could throw away the rare frames that actually contain the signal from Sirius B. I forgot to mention that the big deal with Sirius A being so bright is that it is very close to Sirius B; in my picture there are only about 20 pixels between them, so stray photons from Sirius A can overwhelm the signal from Sirius B very easily. So how do I know that barely perceptible tiny blob almost attached to Sirius A is Sirius B? Luckily, these days we know exactly where Sirius B is, so we can compare its predicted position with the image. Of course there’s a strong possibility that the blob is just an artefact of Sirius A, so I’ll need to do this a few more times to increase confidence – and maybe get a bigger telescope! And a better camera! And a faster computer to process it all!
If you want to join us for some astrophotography then come along to a star safari or contact us for our astrophotography course.