AGN Reverberation Mapping

On the subject of ensemble photometry, the first question is how it differs from aperture photometry. The XVista package provides the PHOT program, which, given the centroid coordinates of a "stellar" object in an image, calculates the flux within an aperture. The flux is the sum of the pixel counts inside the aperture, with fractional pixels included on a weighted basis. The program is akin to DAOPHOT's PHOT, and the uncertainty assigned to the flux comes from Poisson statistics (shot noise, read noise, background noise, etc.). The linear flux is converted to a magnitude using a zero-point value that depends on the sky background intensity (magnitude 25 by default). The result from PHOT is therefore what I consider to be aperture photometry.
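
To make this concrete, here is a minimal sketch in Python/numpy of that kind of aperture measurement. It is not the XVista or DAOPHOT code: the function name aperture_mag, the 5x5 subsampling used to approximate fractional-pixel weighting, and the omission of sky subtraction and of the Poisson error estimate are simplifications of my own.

    import numpy as np

    def aperture_mag(image, xc, yc, radius, zero_point=25.0):
        # Sum the counts inside a circular aperture centred on (xc, yc)
        # and convert the flux to an instrumental magnitude.  Fractional
        # pixels are approximated by subsampling each pixel 5x5 and
        # weighting it by the fraction of subpixels inside the aperture.
        ny, nx = image.shape
        sub = 5
        yy, xx = np.mgrid[0:ny * sub, 0:nx * sub] / sub + 0.5 / sub - 0.5
        inside = (xx - xc) ** 2 + (yy - yc) ** 2 <= radius ** 2
        frac = inside.reshape(ny, sub, nx, sub).mean(axis=(1, 3))
        flux = np.sum(image * frac)               # weighted sum of pixel counts
        return zero_point - 2.5 * np.log10(flux)  # mag 25 zero point by default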

For each image, PHOT is run on every stellar object, and the results are recorded in one file per image. These files are then fed to an aggregator program from the Ensemble package, multipht. Multipht gathers the aperture photometry of each star in each image and puts it in a format ready for the final step, the ensemble solution. What does the ensemble solution entail? It corrects for the zero-point offset of each individual image, since ground-based sky intensities obviously vary from night to night. It goes beyond zero-point corrections by assuming that all stars are constant, and so it attempts to minimize the night-to-night issues of ground-based observing: transparency changes, extinction, clouds, the Moon, exposure times, and field-of-view pointing differences (more or fewer stars in the images). For the details of this algorithm, please see Kent Honeycutt's paper on inhomogeneous ensemble photometry and, in particular, Dr. Michael Richmond's ensemble documentation online.
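
As an illustration of the bookkeeping multipht performs, the sketch below gathers per-image photometry into a star-by-image magnitude matrix. It assumes one text file per image with lines of star_id, magnitude, and uncertainty; the file pattern, format, and function name are hypothetical, not the actual Ensemble package interface.

    import glob
    import numpy as np

    def gather_photometry(pattern="image_*.phot"):
        # Collect per-image aperture photometry into (star x image) arrays,
        # the input the ensemble solution needs.  NaN marks a star that was
        # not measured in a given image (e.g. pointing differences).
        files = sorted(glob.glob(pattern))
        per_image, star_ids = [], set()
        for fname in files:
            rows = {}
            with open(fname) as f:
                for line in f:
                    parts = line.split()
                    if len(parts) < 3:
                        continue
                    rows[parts[0]] = (float(parts[1]), float(parts[2]))
                    star_ids.add(parts[0])
            per_image.append(rows)
        star_ids = sorted(star_ids)
        mags = np.full((len(star_ids), len(files)), np.nan)
        errs = np.full((len(star_ids), len(files)), np.nan)
        for j, rows in enumerate(per_image):
            for i, sid in enumerate(star_ids):
                if sid in rows:
                    mags[i, j], errs[i, j] = rows[sid]
        return star_ids, files, mags, errs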

Now let's look at the AGN MRK885. The field of MRK885 was imaged by the Liverpool Telescope, and the stars used in this example are labeled on the image of the FOV above; I have highlighted only the stars that appear in the following plots. Let's begin by comparing the solution from Pasquale's pipeline with the ensemble solution I calculated. The ensemble solution shows a downward trend. The question is: why this trend?

OK, so let's start with just the solution from the PHOT program: straight aperture photometry with an arbitrary zero point (the sky intensity of the image). Our AGN is the green curve; the others are comparison stars in the ensemble (not all of them). As you can see from this plot, all targets exhibit essentially the same variation, and the curves are dominated by the zero-point offsets. These offsets are removed in the next step of the pipeline (the ensemble). The next plot shows the ensemble zero-point offset corrections.

Now let's see how the same stars look after the ensemble has been run. The next plot shows the corrected ensemble stars, labeled as in the first image. Stars 0 and 1 look flat, while star 9 shows some variation. MRK885 is in green, and at this zoom level you can't see the downward trend (it is very small, well under a magnitude).

Let's first zoom in on the first three stars in the plot to get a feeling for the sub-magnitude variations. The zoomed-in plot shows millimagnitude variations for star 9 (green curve). Notice that star 1 is the brightest star in the ensemble.

Now let's see how MRK885 looks zoomed in. The change is quite small: a downward trend of about 0.1 magnitude, as seen previously in the comparison plot.

More important is to assess the standard deviation of each star in the ensemble. For that purpose I present the last plot, which shows the standard deviation of each light curve as a function of instrumental magnitude. As expected, fainter objects show greater scatter than brighter ones. I have pointed out where the AGN falls relative to the other ensemble objects.
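
A plot like this falls straight out of the corrected magnitude matrix. The short sketch below (mine, not the plotting code actually used) computes each star's mean instrumental magnitude and the standard deviation of its light curve and plots one against the other, labeling the AGN.

    import numpy as np
    import matplotlib.pyplot as plt

    def scatter_vs_magnitude(mags, labels, agn_label="MRK885"):
        # mags: (n_stars, n_images) ensemble-corrected magnitudes, NaN = missing.
        mean_mag = np.nanmean(mags, axis=1)
        std_mag = np.nanstd(mags, axis=1)
        plt.scatter(mean_mag, std_mag, s=20)
        for m, s, lab in zip(mean_mag, std_mag, labels):
            if lab == agn_label:                 # point out the AGN
                plt.annotate(lab, (m, s), color="green")
        plt.xlabel("mean instrumental magnitude")
        plt.ylabel("light-curve standard deviation (mag)")
        plt.show()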


OK, so what is the "optimal solution" that we obtain from the Ensemble package? The following I have shamelessly taken from Dr. Michael Richmond's website. First, we make the following assumptions:


  1. Most of the stars are constant, with some well-defined true magnitude. Let us denote the true magnitude of star i by the symbol M(i).
  2. Each image may have some zero-point offset in its magnitude values, but there are no more complicated systematic errors. Let us use e(j) to stand for the zero-point offset in image j. If image 2 has a zero-point offset of 0.05, that means that all the stellar magnitude values in that image are too large by 0.05 mag.

We have a large set of actual measurements m(i,j) of a number of stars in a number of images. Assuming that our model of the system is accurate, the error in each measurement must be


error(i,j)  =  m(i,j)  -  [ M(i) + e(j) ]
Our task is to find the image offsets e(j) and true magnitudes M(i) which minimize the remaining errors. The program uses standard least-squares techniques to find these parameters, then applies them to correct all measurements to the ensemble solution.
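
As an illustration of that least-squares step (not Honeycutt's actual implementation, which treats the sparse system more carefully), one can build a design matrix with one unknown per star and one per image and solve for the M(i) and e(j) that minimize the weighted errors. One offset has to be pinned, here e(0) = 0, because only differences between images are constrained.

    import numpy as np

    def ensemble_solve(mags, errs):
        # mags, errs: (n_stars, n_images) measured magnitudes and uncertainties,
        # NaN where a star is missing from an image.  Fit the model
        # m(i,j) = M(i) + e(j) by weighted least squares, fixing e(0) = 0.
        n_stars, n_images = mags.shape
        rows, cols, b, w = [], [], [], []
        eq = 0
        for i in range(n_stars):
            for j in range(n_images):
                if np.isnan(mags[i, j]):
                    continue
                rows.append(eq); cols.append(i)                    # coefficient of M(i)
                if j > 0:
                    rows.append(eq); cols.append(n_stars + j - 1)  # coefficient of e(j)
                b.append(mags[i, j])
                w.append(1.0 / errs[i, j] ** 2)
                eq += 1
        A = np.zeros((eq, n_stars + n_images - 1))
        A[rows, cols] = 1.0
        sw = np.sqrt(np.array(w))
        x, *_ = np.linalg.lstsq(A * sw[:, None], np.array(b) * sw, rcond=None)
        M = x[:n_stars]
        e = np.concatenate([[0.0], x[n_stars:]])
        corrected = mags - e[None, :]    # remove each image's zero-point offset
        return M, e, corrected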


After finding the least-squares solution for the zero-point values, the program judges them by calculating weighted sums for each image: given i = 1..N stars in the image, it computes two measures of the degree to which all the stars in that image can be brought to match their true magnitudes.

    let    z  =  corrected_mag(i,j)  -  true_mag(i)
           w  =  weight given to m(i,j) based on input mag uncertainty

                      sum (z*z*w)
          z1  =      -------------
                      sum (N*w)

                           z1
          z2  =      --------------
                        sqrt (N)

The smaller the values of z1 and z2, the more closely the (corrected) measurements on this image match the overall ensemble measurements.
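
Written out in code, the two figures of merit above amount to the following sketch, using the same corrected-magnitude, true-magnitude, and weight arrays as before:

    import numpy as np

    def image_quality(corrected, true_mag, errs):
        # z1 and z2 for each image, as defined above:
        #   z = corrected_mag(i,j) - true_mag(i)
        #   w = weight from the input magnitude uncertainty
        w = 1.0 / errs ** 2
        z = corrected - true_mag[:, None]
        valid = ~np.isnan(z)
        n_images = corrected.shape[1]
        z1 = np.empty(n_images)
        z2 = np.empty(n_images)
        for j in range(n_images):
            m = valid[:, j]
            N = m.sum()                          # stars measured in image j
            z1[j] = np.sum(z[m, j] ** 2 * w[m, j]) / np.sum(N * w[m, j])
            z2[j] = z1[j] / np.sqrt(N)
        return z1, z2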