The task of image-based virtual try-on aims to transfer a target clothing item onto the corresponding region of a person, and is commonly tackled by fitting the item to the desired body part and fusing the warped item with the person. While an increasing number of studies have been conducted, the resolution of synthesized images remains low (e.g., 256x192), which is a critical limitation for satisfying online consumers. We argue that this limitation stems from several challenging aspects: the architectures used in existing methods struggle to generate high-quality body parts and to preserve the texture sharpness of the clothes, and as the resolution increases, artifacts in the misaligned areas between the warped clothes and the desired clothing regions become noticeable in the final results. To address these challenges, we propose a novel virtual try-on method, VITON-HD, that successfully synthesizes 1024x768 try-on images. Specifically, we first prepare a segmentation map to guide the try-on synthesis and roughly fit the clothing item to the given person’s body. We then propose ALIgnment-Aware Segment (ALIAS) normalization and the ALIAS generator to handle the misaligned areas and to preserve the details of the 1024x768 inputs. Through rigorous comparison with existing methods, we demonstrate that VITON-HD clearly surpasses the baselines in synthesized image quality, both qualitatively and quantitatively.
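To make the idea of alignment-aware normalization concrete, the following is a minimal PyTorch-style sketch of a SPADE-like normalization layer that standardizes the misaligned region separately from the rest of the activations before modulating them with parameters inferred from the segmentation map. The class name, layer sizes, argument names (seg_map, misalign_mask), and the region-wise standardization scheme are illustrative assumptions, not the authors' exact ALIAS implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentAwareNorm(nn.Module):
    """Hypothetical SPADE-like normalization conditioned on a segmentation map
    and a misalignment mask (a sketch in the spirit of ALIAS normalization)."""

    def __init__(self, num_features, seg_channels, hidden=128):
        super().__init__()
        # Shared convolution that maps the segmentation map to per-pixel
        # modulation features, followed by separate scale/bias heads.
        self.shared = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x, seg_map, misalign_mask):
        # x: (N, C, H, W) activations; seg_map: (N, S, H', W');
        # misalign_mask: (N, 1, H', W'), 1 where warped clothes miss the target region.
        seg = F.interpolate(seg_map, size=x.shape[2:], mode='nearest')
        mask = F.interpolate(misalign_mask, size=x.shape[2:], mode='nearest')
        # Standardize the misaligned region and the remaining region separately,
        # so statistics of the misaligned pixels do not distort the rest.
        normalized = (self._standardize(x, 1.0 - mask) * (1.0 - mask)
                      + self._standardize(x, mask) * mask)
        # Spatially varying modulation inferred from the segmentation map.
        actv = self.shared(seg)
        return normalized * (1.0 + self.gamma(actv)) + self.beta(actv)

    @staticmethod
    def _standardize(x, region, eps=1e-5):
        # Per-sample, per-channel mean/std computed only inside `region`.
        area = region.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
        mean = (x * region).sum(dim=(2, 3), keepdim=True) / area
        var = ((x - mean) ** 2 * region).sum(dim=(2, 3), keepdim=True) / area
        return (x - mean) / torch.sqrt(var + eps)
```

Such a layer would replace standard normalization blocks inside the generator, taking the segmentation map and misalignment mask as additional inputs at every resolution level.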