Advancements in Celestial Object Classification Using Enhanced Deep Learning Models

Researchers have proposed enhancements to MargNet, a deep learning-based classifier, to improve the classification of celestial objects. The study, authored by Srinadh Reddy Bhavanam, Sumohana S. Channappayya, P.K. Srijith, and Shantanu Desai, introduces attention mechanisms and Vision Transformer (ViT)-based models into MargNet. These enhancements aim to better distinguish stars, quasars, and compact galaxies using photometric data from the SDSS DR16 dataset.

The integration of attention mechanisms allows the model to focus on the most relevant features and capture intricate patterns within images, improving classification accuracy. Vision Transformers, known for their strong performance in image classification tasks, are used to capture global dependencies and contextual information in complex astronomical images. The study draws on a curated dataset of 240,000 compact and 150,000 faint objects, allowing the models to learn the classification directly from the data with minimal human intervention.
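To make the first idea concrete, the sketch below adds a squeeze-and-excitation style channel-attention block to a small CNN that classifies multi-band cutouts into star, quasar, or galaxy. This is a minimal PyTorch illustration, not the actual MargNet architecture: the layer sizes, the 5-band 32x32 input, and the particular attention variant are assumptions made purely for demonstration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: reweights CNN feature
    maps so the classifier can emphasise the most informative filters."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global spatial summary per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # rescale feature maps


class AttentionCNNClassifier(nn.Module):
    """Toy attention-augmented CNN classifying cutouts into three classes
    (star / quasar / galaxy). Input: 5-band SDSS-like image cutouts (assumed)."""
    def __init__(self, in_bands: int = 5, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(64),                     # attention over feature maps
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


# Example: a batch of 4 synthetic 32x32 cutouts in 5 photometric bands
logits = AttentionCNNClassifier()(torch.randn(4, 5, 32, 32))
print(logits.shape)  # torch.Size([4, 3])
```

The attention block learns one weight per feature map, so the network can emphasise channels that respond to compact morphological cues while suppressing those dominated by background.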

Results indicate that the attention-augmented CNN version of MargNet marginally outperforms both the original MargNet and the ViT-based variants. The ViT-based hybrid model, however, is highlighted as the most lightweight and easiest to train, achieving classification accuracy comparable to that of the best-performing attention-enhanced MargNet.
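To illustrate what a "hybrid" model of this kind can look like, the sketch below pairs a small convolutional stem with a lightweight Transformer encoder: the CNN turns each cutout into a grid of feature tokens, and the Transformer attends over them globally before a classification head. This is an assumed, illustrative configuration only; the token dimension, depth, and input size are not taken from the paper.

```python
import torch
import torch.nn as nn


class HybridCNNViT(nn.Module):
    """Illustrative CNN/Transformer hybrid: a convolutional stem produces
    patch-like feature tokens, which a small Transformer encoder classifies."""
    def __init__(self, in_bands: int = 5, n_classes: int = 3, dim: int = 64):
        super().__init__()
        # CNN stem: downsample a 32x32 cutout to an 8x8 grid of feature tokens
        self.stem = nn.Sequential(
            nn.Conv2d(in_bands, dim, kernel_size=4, stride=4),
            nn.ReLU(inplace=True),
        )
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 65, dim))  # 8*8 tokens + cls
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.stem(x).flatten(2).transpose(1, 2)   # (B, 64, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])                    # classify via the cls token


# Example: a batch of 4 synthetic 32x32 cutouts in 5 photometric bands
logits = HybridCNNViT()(torch.randn(4, 5, 32, 32))
print(logits.shape)  # torch.Size([4, 3])
```

Because the convolutional stem shrinks the image before self-attention is applied, a hybrid of this kind typically has far fewer parameters than a full ViT, which is one plausible reading of why the hybrid is described as lightweight and easy to train.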

These advancements in astronomical source classification could significantly enhance our understanding of the universe by providing more accurate identification of celestial objects, aiding a wide range of astrophysical studies.