March 28 - 29, 2025, Virtual Conference
Clifton Reddy1, Anagha Reddy2, 1American National Insurance, League City, Texas, USA, 2Transamerica Insurance, Cedar Rapids, Iowa, USA
The growing use of client-side technologies for file downloads requires mechanisms to inform users about CO2 emissions, especially when downloading large files or streaming media. This paper proposes a method to validate file sizes using the HTTP HEAD method and warn users of carbon emissions before downloading. Client-side scripts send a HEAD request to retrieve the file size and region information, calculate emissions, and display a dialog to users, allowing them to decide whether to proceed. This approach reduces resource consumption and educates users about the environmental impact of their internet activities.
Carbon Emission, Content Download, Mobile Data, Sustainable Practices, Browser Functionality.
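The proposed check can be sketched in a few lines: issue an HTTP HEAD request to learn the file size from the `Content-Length` header without downloading the body, then convert that size into an estimated CO2 mass for the warning dialog. This is a minimal Python sketch of the idea; the emission factor and the function names are illustrative assumptions, not values from the paper, and real factors vary by region and energy mix.

```python
# Sketch: estimate the CO2 cost of a download before fetching the body.
from urllib.request import Request, urlopen

# Assumed average network-transfer intensity (kg CO2 per GB) -- an
# illustrative placeholder; real per-region figures differ.
KG_CO2_PER_GB = 0.05

def head_content_length(url: str) -> int:
    """Return the Content-Length advertised by the server via a HEAD request."""
    req = Request(url, method="HEAD")
    with urlopen(req) as resp:
        return int(resp.headers.get("Content-Length", 0))

def estimate_emissions_kg(size_bytes: int, factor: float = KG_CO2_PER_GB) -> float:
    """Convert a payload size into an estimated CO2 mass in kilograms."""
    return size_bytes / 1e9 * factor

def download_warning(size_bytes: int) -> str:
    """Build the dialog text a client-side script could show before downloading."""
    kg = estimate_emissions_kg(size_bytes)
    return f"This {size_bytes / 1e6:.1f} MB download emits ~{kg * 1000:.1f} g CO2. Proceed?"
```

In a browser the same two steps would use `fetch(url, { method: "HEAD" })`; the dialog then lets the user cancel before any payload bytes are transferred.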
Bowen Su, Michigan State University, United States of America
Large-scale matrices arising in applications such as social networks and genomics often exhibit low-rank structures that traditional decomposition techniques, like Singular Value Decomposition, cannot efficiently handle due to their high computational cost. In this paper, we present a Scalable Binary CUR Low-Rank Approximation Algorithm designed to overcome these limitations by leveraging parallel processing and a novel blockwise adaptive cross algorithm. Our approach selects representative rows and columns through a binary parallel selection process, constructing a CUR decomposition that approximates the original matrix with significantly reduced complexity. Numerical experiments on Hilbert matrices and synthetic low-rank matrices demonstrate that our algorithm achieves near-optimal accuracy while offering substantial improvements in computational efficiency. Furthermore, scalability analysis indicates that the proposed method effectively utilizes multi-core architectures, paving the way for efficient processing of extremely large datasets.
Low-Rank Approximation, Multicore, Scalable Algorithm.
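The core of any CUR method is the identity A ≈ C U R, where C and R are actual columns and rows of A and U is a small linking matrix. The paper's binary parallel selection and blockwise adaptive cross steps are not reproduced here; the sketch below is a simplified serial stand-in that samples rows and columns by squared-norm probabilities and computes U with pseudoinverses, just to make the decomposition concrete.

```python
import numpy as np

def cur_decomposition(A, k, seed=None):
    """Simplified CUR: sample k columns and k rows with probability
    proportional to their squared norms, then solve for the linking
    matrix U so that C @ U @ R approximates A."""
    rng = np.random.default_rng(seed)
    col_p = np.sum(A**2, axis=0); col_p = col_p / col_p.sum()
    row_p = np.sum(A**2, axis=1); row_p = row_p / row_p.sum()
    cols = rng.choice(A.shape[1], size=k, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=k, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # least-squares link
    return C, U, R

# Demo on a synthetic exactly-rank-3 matrix, as in the paper's experiments.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
C, U, R = cur_decomposition(A, k=5, seed=1)
rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

When the sampled columns span the column space of A (generically true once k exceeds the rank), the reconstruction is exact up to floating-point error; the interest of the paper's blockwise parallel selection lies in reaching such subsets cheaply at scale.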
Jianheng Li1 and Lirong Chen2, 1Department of Computer Engineering, Inner Mongolia University, Hohhot, China, 2School of Computer Science, Inner Mongolia University, Hohhot, China
This study applies the Sentence-BERT model to the e-commerce domain, presenting consumers with key information on fine-grained product attributes. It combines fine-tuning of the Sentence-BERT word embedding model with the LDA topic model. First, the Sentence-BERT model is fine-tuned on the e-commerce domain, converting online review text into a more semantically informative set of word vectors; second, the vectorized word set is fed into the LDA model for topic feature extraction; finally, the key features of each product are identified through keyword analysis within each topic. The study also pairs other word embedding models with LDA and compares the results against commonly used topic extraction methods. The proposed model improves the granularity and accuracy of topic segmentation and achieves good topic coherence.
Sentence-BERT, LDA Model, Topic Extraction.
Yuxuan Cheng, Beijing Normal-Hong Kong Baptist University, China
Natural data collected from the real world often exhibit the scale imbalance problem. A large object can produce a much larger loss than a small object, causing the detector to favor large objects even when small objects dominate the dataset. This bias inside detectors degrades performance on small objects. To alleviate the problem, this paper proposes a new patch-level collage-style data augmentation technique and a new global scheduler built on the existing dynamic scale training paradigm. The proposed augmentation generates collage images with uniform object scales for a stronger augmentation effect, while the global scheduler adjusts the relative strength of different data augmentations to suit different stages of training. Experiments demonstrate the effectiveness of both techniques.
Scale Imbalance, Data Augmentation, Model Brittleness.
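The collage idea amounts to rescaling object patches to a common size before tiling them into one training image, so every pasted object contributes at a uniform scale. Below is a minimal numpy sketch of that assembly step; the 2x2 layout, fixed patch size, and nearest-neighbor resize are simplifying assumptions, and the paper's global scheduler is not modeled here.

```python
import numpy as np

def resize_nearest(img, h, w):
    """Nearest-neighbor resize, enough to bring patches to a common scale."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def collage_2x2(patches, patch_size=64):
    """Tile four object patches into one collage image with uniform scale."""
    assert len(patches) == 4, "this sketch assumes a 2x2 layout"
    tiles = [resize_nearest(p, patch_size, patch_size) for p in patches]
    top = np.concatenate(tiles[:2], axis=1)
    bottom = np.concatenate(tiles[2:], axis=1)
    return np.concatenate([top, bottom], axis=0)

# Patches of very different original scales end up at identical sizes.
patches = [
    np.zeros((30, 50, 3)),   # small object crop
    np.ones((100, 80, 3)),   # large object crop
    np.zeros((64, 64, 3)),
    np.ones((10, 10, 3)),
]
collage = collage_2x2(patches)
```

Because every tile lands at the same resolution, small-object crops contribute as many pixels (and hence as much loss signal) as large ones, which is the imbalance the augmentation targets.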