The DeFTAN (Dense Frequency-Time Attentive Network) series comprises deep learning models for multichannel speech enhancement and separation. The models improve speech quality in noisy, reverberant environments by exploiting spatial, frequency, and temporal information, and each model addresses a specific challenge such as real-time processing, array-agnostic enhancement, or universal source separation, applying state-of-the-art techniques for high performance. This work broadens the Smart Sound System Lab's research to applications in multichannel sound source separation.
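As a rough illustration of the kind of input such models consume, the sketch below (an assumption for illustration, not the actual DeFTAN code) builds a multichannel time-frequency representation: each microphone channel is transformed with an STFT and the results are stacked into a (channels, frequency, time) tensor, from which spatial, frequency, and temporal cues can be drawn.

```python
import numpy as np

def multichannel_stft(x, n_fft=256, hop=128):
    """Hypothetical preprocessing: per-channel STFT of a multi-mic
    signal x of shape (channels, samples), stacked into a complex
    (channels, freq_bins, frames) tensor."""
    win = np.hanning(n_fft)
    n_frames = 1 + (x.shape[1] - n_fft) // hop
    out = np.empty((x.shape[0], n_fft // 2 + 1, n_frames), dtype=complex)
    for c in range(x.shape[0]):
        # Slice overlapping windows, apply the analysis window, then FFT.
        frames = np.stack([x[c, i * hop : i * hop + n_fft] * win
                           for i in range(n_frames)])
        out[c] = np.fft.rfft(frames, axis=1).T
    return out

# Simulated 4-microphone recording: 1 second at 16 kHz.
mics = np.random.randn(4, 16000)
feat = multichannel_stft(mics)
print(feat.shape)  # (4, 129, 124): channels x frequency bins x time frames
```

A network operating on this tensor can attend across the channel axis for spatial cues and across the frequency and time axes, which is the general setting the DeFTAN models work in.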