Our competition invites participants to devise solutions for predicting potential failures of individual DRAM modules within a subsequent observation period. The final solution is expected to run in, and bring value to, real-world production environments.
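To make the task concrete, here is a minimal sketch of one common framing of such a problem: binary classification over time windows. The window lengths, field names, and helper function below are illustrative assumptions, not the official task specification.

```python
# A minimal sketch, assuming an observation-window / prediction-window
# framing; window lengths and field names are illustrative, not official.
from dataclasses import dataclass
from datetime import datetime, timedelta

OBSERVATION_WINDOW = timedelta(days=30)  # history of error logs the model may see
PREDICTION_WINDOW = timedelta(days=7)    # horizon within which a failure counts

@dataclass
class ErrorEvent:
    module_id: str       # identifier of the DRAM module
    timestamp: datetime
    error_type: str      # e.g. "CE" (correctable) or "UE" (uncorrectable)

def make_example(events: list[ErrorEvent], failure_time: datetime | None,
                 cutoff: datetime) -> tuple[list[ErrorEvent], int]:
    """Build one (features, label) pair for a module at a given cutoff time.

    Features: events inside the observation window ending at `cutoff`.
    Label: 1 if the module fails within the prediction window after `cutoff`.
    """
    history = [e for e in events
               if cutoff - OBSERVATION_WINDOW <= e.timestamp <= cutoff]
    label = int(failure_time is not None
                and cutoff < failure_time <= cutoff + PREDICTION_WINDOW)
    return history, label
```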
The competition is open to everyone interested in advancing memory failure prediction for reliability. Individuals and teams from academic, industrial, and independent backgrounds are welcome to contribute their expertise.
The competition comprises two stages: the first stage features an AB List setup, with training data tailored to two different memory models. In the second stage, a fresh dataset covering mixed models (more than two) will be introduced to encourage solutions with few-shot learning and knowledge-transfer capabilities. Overall, the competition’s appeal lies in its practical relevance, the accessible entry point of the first stage, and the fresh challenges presented in both stages.
Registration details and deadlines will be provided on the official ML for Systems Workshop Memory Failure Prediction website. Participants can register for either or both tracks at any time during the competition period.
Participants are encouraged to have a background in machine learning, memory systems, or related fields. However, the challenge is designed to accommodate a range of skill and knowledge levels. Familiarity with failure prediction will be beneficial.
The competition will focus on memory-related system error log data, specifically Correctable Errors (CEs) and Uncorrectable Errors (UEs). The datasets include memory failure tickets and error logs collected from real-world datacenters.
For more information on the datasets, please refer to the Getting Started page.
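As a hypothetical illustration of how the two datasets might be combined, the sketch below joins error logs with failure tickets to derive per-module features and labels. The file names and column names are assumptions, not the official schema; consult the Getting Started page for the actual formats.

```python
# A hypothetical sketch of joining error logs with failure tickets;
# file names and column names are assumptions, not the official schema.
import pandas as pd

logs = pd.read_csv("error_logs.csv", parse_dates=["timestamp"])        # CE/UE events
tickets = pd.read_csv("failure_tickets.csv", parse_dates=["failure_time"])

# Aggregate simple per-module counts of correctable/uncorrectable errors.
features = (logs.pivot_table(index="module_id", columns="error_type",
                             values="timestamp", aggfunc="count", fill_value=0)
                .rename(columns={"CE": "ce_count", "UE": "ue_count"}))

# A module is labeled positive if it appears in the failure tickets.
features["failed"] = features.index.isin(tickets["module_id"]).astype(int)
print(features.head())
```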
Participants must submit their code, models (if developed), and a short paper describing their approach. Details on submission formats and platforms are available on the Getting Started page.
Yes, teams of any size are allowed, including solo participants. Collaboration is encouraged to leverage diverse skills and perspectives.
During the validation phase, each team is limited to 5 submissions per day for each track. In the test phase, teams are restricted to a total of 5 submissions. Only one account per team is permitted for submissions to ensure fairness.
To be eligible for prizes, winning teams must share their methods, code, and models with the organizers. Sharing with the broader community is encouraged to foster knowledge exchange and innovation.
Submissions will be assessed based on the accuracy and efficiency of their failure predictions. These criteria are designed to measure the practical and theoretical impact of the proposed prediction strategies.
For more information on evaluation metrics, please refer to the Prizes & Tracks page.
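As an illustration only, the sketch below computes a plausible score of this kind, F1 over the set of modules predicted to fail, which is a common way to grade such tasks; the official metrics are those defined on the Prizes & Tracks page.

```python
# A minimal sketch, assuming submissions are scored with precision/recall/F1
# over predicted failing modules; the official metric may differ.
def f1_score(predicted: set[str], actual: set[str]) -> float:
    """F1 over module IDs: predicted-to-fail vs. actually failed."""
    if not predicted or not actual:
        return 0.0
    tp = len(predicted & actual)          # correctly predicted failures
    precision = tp / len(predicted)
    recall = tp / len(actual)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: three predicted failures, two of which actually failed.
print(f1_score({"m1", "m2", "m3"}, {"m2", "m3", "m4"}))  # ~0.667
```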
Prizes will include cash awards and best paper awards. Awards will be given for first, second, and third place in each track, as well as special awards for particularly cost-effective and high-performing methods.
For more information on prizes, please refer to the Prizes & Tracks page.
For any inquiries, participants can reach out to the organizers via email: zhoumin27@huawei.com.
Key dates, including registration deadlines, submission deadlines, and prize announcement dates, will be displayed on the website and through official communications to registered participants.
For more information, please refer to our website: https://hyxie2023.github.io/SmartMem.github.io/.