Boosting recommendation performance by user intention aware visual feature pre-training

14/01/2026 Frontiers Journals

With the advancement of the multimedia internet, visual characteristics increasingly influence whether users click within the online retail industry. Incorporating visual features is therefore a promising direction for further improving click-through rate (CTR) prediction. However, experiments on the team's production system revealed that simply injecting image embeddings trained with established pre-training methods yields only marginal improvements.

To address this problem, a research team led by De-Chuan Zhan published their new research on 15 July 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.

The team observed that the main strength of existing image feature pre-training methods lies in cross-modal prediction, which differs substantially from CTR prediction in recommendation systems. In a recommender, information from other modalities (such as text) can be fed directly to downstream models as features, so even an excellent cross-modal predictor provides little additional information gain. The team therefore argued that a visual feature pre-training method tailored to recommendation is needed to improve on existing modality features.

To this end, they proposed a user intention reconstruction module that mines interest-related visual features from users' behavior histories, establishing a many-to-one correspondence between a history and the clicked item. They further proposed a contrastive training method that learns user intentions while preventing the embedding vectors from collapsing. Extensive experiments on public datasets and the Taobao production system verified that the method learns users' visual interests: it achieved a 0.46% improvement in offline AUC and a 0.88% improvement in Taobao GMV (Gross Merchandise Volume) with p-value < 0.01, which is significant considering the large active user volume of the Taobao App.
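For readers who want a concrete picture of the two components, the PyTorch sketch below shows how an attention-based reconstruction module and an InfoNCE-style contrastive objective could fit together. It is a minimal sketch, not the authors' published code: the names (`IntentionReconstruction`, `contrastive_loss`), the attention formulation, and the choice of InfoNCE are assumptions about one plausible realisation of the ideas described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntentionReconstruction(nn.Module):
    """Reconstructs the clicked item's image embedding from the user's
    behavior history via attention: a many-to-one mapping from the
    history to a single target item (hypothetical realisation)."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)  # projects the target item
        self.key_proj = nn.Linear(dim, dim)    # projects history items
        self.scale = dim ** -0.5

    def forward(self, target: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # target:  (batch, dim)       image embedding of the clicked item
        # history: (batch, len, dim)  image embeddings of past clicks
        q = self.query_proj(target).unsqueeze(1)                  # (batch, 1, dim)
        k = self.key_proj(history)                                # (batch, len, dim)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return (attn @ history).squeeze(1)                        # (batch, dim)

def contrastive_loss(recon: torch.Tensor, target: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: each reconstruction should match its own
    target, with other in-batch targets serving as negatives."""
    recon = F.normalize(recon, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = recon @ target.t() / temperature                     # (batch, batch)
    labels = torch.arange(recon.size(0), device=recon.device)
    return F.cross_entropy(logits, labels)
```

In a formulation like this one, the in-batch negatives are what keep the embeddings from collapsing to a single point, matching the collapse-prevention role the paper assigns to its contrastive training.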
DOI: 10.1007/s11704-024-3939-x
Attached files
  • Fig. 1: User intention reconstruction module
Regions: Asia, China
Keywords: Applied science, Computing
