DiFashion: Utilizing diffusion models for personalized and high-fidelity generative outfit recommendations
https://doi.org/10.14419/p62jrc18
Abstract
Artificial intelligence is increasingly used in the fashion industry to improve customer engagement and personalization. This paper examines the use of diffusion models, a class of generative models, for producing high-fidelity, personalized outfit recommendations. Diffusion models generate images by gradually transforming random noise into coherent visual outputs, which makes them well suited to producing intricate, detailed imagery. Their capacity to capture complex materials, patterns, and garment styles makes them ideal for generating realistic and visually appealing outfit suggestions.
We analyze how these models outperform conventional approaches such as variational autoencoders (VAEs) and generative adversarial networks (GANs) in visual quality, diversity, and control over the generation process. The paper also shows how diffusion models can produce personalized clothing options by conditioning on user attributes such as body type, style, and previous fashion choices. We further discuss how DiFashion systems may affect several areas of the fashion business, including virtual try-on and sustainable fashion. The paper concludes by identifying the limitations and future prospects of diffusion models for fashion recommendation systems, with particular attention to scalability, user involvement, and computational requirements.
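To make the "random noise to image" mechanism concrete, below is a minimal NumPy sketch of the forward (noising) side of a DDPM-style diffusion process. This is an illustrative toy, not the authors' implementation; the schedule values (T = 1000, betas linearly spaced from 1e-4 to 0.02) are common DDPM defaults assumed here, and the 8x8 array stands in for a garment image. A trained model would learn to invert this process step by step, recovering an image from noise.

```python
import numpy as np

# Linear noise schedule: beta_t grows from 1e-4 to 0.02 over T steps
# (common DDPM defaults, assumed here for illustration).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product, alpha_bar_t

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0): blend the clean image with Gaussian noise.

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))        # stand-in for a garment image
xt, eps = forward_diffuse(x0, T - 1, rng)

# By the final step the signal coefficient sqrt(alpha_bar_T) is tiny,
# so x_t is essentially pure noise -- the starting point for generation.
print(float(np.sqrt(alpha_bars[T - 1])))
```

The reverse process, which the paper's generative models learn, runs this chain backwards: starting from pure noise, a neural network predicts and removes a small amount of noise at each step until a coherent outfit image remains.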
How to Cite
Deshmukh, K., Khade, A., Pawar, V., Vyawahare, S., Arti Singh, M., & Priyanka Abhale, M. (2025). DiFashion: Utilizing diffusion models for personalized and high-fidelity generative outfit recommendations. International Journal of Advanced Mathematical Sciences, 11(1), 44-48. https://doi.org/10.14419/p62jrc18