Coronal Hole Model Validation with Synchronic Maps

Authors: Andrew Leisner (George Mason University), Jie Zhang (George Mason University), Nick Arge (Goddard Space Flight Center), Scarlett Adams (Harvard University), Angela Jin (Phillips Academy Andover), Omar Aljebrin (George Mason University)

Identifying coronal holes in solar disk images is challenging yet critical, as they serve as a key constraint for coronal models. In this poster, we discuss a process for creating synchronic coronal hole maps, with coronal holes identified manually in a set of STEREO EUVI-A, EUVI-B, and SDO/AIA disk images from April through August of 2012. First, an EUV global synchronic map was made using a synchronic map algorithm. Then, labeling software was used to carefully outline the boundaries of coronal holes identified by eye, resulting in a ground-truth coronal hole map. Four labelers each created their own set of labels for the same dataset of synchronic maps; these four sets were then averaged together to form a composite manual coronal hole map, which captures the uncertainty in the positions of the coronal holes. Next, the manual maps were compared directly against sets of WSA model coronal hole predictions driven by several types of photospheric magnetic field maps as input: ADAPT VSM, ADAPT GONG, and standard GONG magnetograms. The modeled coronal holes from the ADAPT runs were composited in the same fashion as the manual set. The comparison was performed quantitatively on a global scale using several metrics: the total open area, the Jaccard index, the true skill statistic, and the fractional skill score. The fractional skill score is computed by dividing each map into boxes of a given size, in this case 20° by 20°, and computing the fraction of each box that is filled. The root mean squared error (rMSE) between the fractional sums over all corresponding manual and modeled map boxes is then calculated. Finally, the rMSE between a given reference map and the manual map is calculated and compared against the rMSE of the modeled map to give the skill score.
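The metrics described above can be sketched for binary coronal hole masks as follows. This is a minimal illustration, not the authors' code: the function names, the pixel-based box size (standing in for the 20° by 20° boxes), and the specific skill-score form (1 minus the ratio of modeled to reference rMSE) are assumptions for the sketch.

```python
import numpy as np

def jaccard(manual, model):
    # Intersection-over-union of two binary coronal hole masks.
    inter = np.logical_and(manual, model).sum()
    union = np.logical_or(manual, model).sum()
    return inter / union

def true_skill_statistic(manual, model):
    # TSS = hit rate minus false alarm rate for a binary classification.
    tp = np.logical_and(manual, model).sum()
    fn = np.logical_and(manual, ~model).sum()
    fp = np.logical_and(~manual, model).sum()
    tn = np.logical_and(~manual, ~model).sum()
    return tp / (tp + fn) - fp / (fp + tn)

def box_fractions(mask, box):
    # Divide the map into box-by-box tiles (pixels here, standing in
    # for the 20° by 20° boxes in the text) and return the filled
    # fraction of each tile.
    h, w = mask.shape
    tiles = mask[:h - h % box, :w - w % box].reshape(
        h // box, box, w // box, box)
    return tiles.mean(axis=(1, 3))

def fractional_skill_score(manual, model, reference, box=4):
    # rMSE of modeled vs. manual box fractions, skill-scored against
    # the rMSE of a reference map vs. the manual map (assumed form:
    # 1 - rMSE_model / rMSE_reference).
    fm = box_fractions(manual, box)
    rmse_model = np.sqrt(np.mean((box_fractions(model, box) - fm) ** 2))
    rmse_ref = np.sqrt(np.mean((box_fractions(reference, box) - fm) ** 2))
    return 1.0 - rmse_model / rmse_ref
```

Under this form, a model matching the manual map scores 1, a model no better than the reference scores 0, and a worse model scores negative, which is why the choice of reference map matters for the final skill score.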
Scores based on several different reference maps were computed. We find that, across all metrics, ADAPT GONG performs the worst, while ADAPT VSM performs the best, with standard GONG close behind. Results on the effect of the choice of reference map on the final skill score are still forthcoming.