Forum Replies Created
kjd02002 (Moderator)
Dear realai,
Please reference the response to team tsinghua_wcy in post #797 above.
- This reply was modified 4 years ago by samuel_admin.
kjd02002 (Moderator)
Dear team tsinghua_wcy,
Thank you for sharing some ideas for enabling more robust code validation for the data challenge. The data challenge committee considered these types of tools before releasing the data challenge this year; however, there were a few situations we wanted to avoid. For example:
1. Terms and Conditions (T&Cs) for Open Source applications and services typically allow general use licenses (including forking) for code that is shared publicly (reference: GitHub Terms of Service D.4. – D.7. which apply to CodaLab). This is often acceptable in academia, but not for many industry participants due to intellectual property policies. Note: This year’s data challenge was split 60% academia and 40% industry so this was a significant consideration.
2. Additionally, many available tools of this type assume models will be written exclusively in Python. While Python is predominant, it is not a requirement we would like to impose for the data challenge.
- This reply was modified 4 years ago by samuel_admin.
kjd02002 (Moderator)
Hello team _______,
If you are interested in sharing code and discussing methods for this data challenge in more detail, then I encourage you to start the discussion in these forums. If there is sufficient interest, then the group may decide to work in other shared spaces (e.g. CodaLab).
If you are unable to send emails to posters@phmconference.org, then you may contact the 2019 posters chair directly at jamie@utk.edu.
kjd02002 (Moderator)
Final results are still pending validation/review. However, here are preliminary results, which will be posted to the website soon:
RANK | TEAM | SCORE
1-6 | Angler, KoreanAvengers, pyEstimate, Seoulg, SLUNG, ValyrianAluminumers (alphabetical order) | Not Yet Announced
7 | trybest | 24.39
8 | tsinghua_wcy | 29.72
9 | HIAI | 31.61
10 | LDM | 34.97
11 | PHiMa | 43.46
12 | TeamBlue | 46.32
13 | HITF | 48.17
14 | JCHA | 52.54
15 | ChoochooTrain | 53.83
16 | NaN | 55.94
17 | realai | 58.95
18 | beat_real | 59.00
19 | 20Years | 72.56
20 | Runtime_Terror | 78.57
21 | CAPE_HM301 | 88.67
22 | RRML | 96.04
23 | SyRRA | 128.86
24 | 553 | 153.86
25 | ACES | 165.68
26 | Cracker | 189.35
27 | FMAKE | 192.39
28 | Xukunv | 194.88
29 | NTS01 | 199.43
30 | cranthena | 239.02
31 | _______ | 243.54
32 | Apostov | 253.86
33 | TeamKawakatsu | 282.46
34 | U-Q | 526.79
35 | ISNDE | 594.61
36 | UTJS-1 | 621.93
37 | ISX-MPO | 755.49
38 | UBC-Okanagan | 1140.69
39 | NukeGrads | 2175.56
40 | NUTN_DSG_TW | 2381.42
41 | GTC | 5513.57
42 | TWT | 6346.47
43 | dataking | 6346.47
44 | Arundites | 16571.80
45 | LIACS | 86762.72
46 | TPRML | 10^7
47 | Mizzou | 10^19
48 | DSBIGINNER333 | > 10^100
- This reply was modified 4 years ago by samuel_admin.
kjd02002 (Moderator)
Dear tsinghua_wcy,
Thank you for your message. I appreciate that you’ve brought this to our attention in the interest of a fair competition.
Please note that preliminary winners of the data challenge must submit explanations of their models, as well as journal papers, before the committee approves the final competition winners. Therefore, submitting the predictions that yield the best score will not, by itself, guarantee a team a top-3 position in the competition.
We have been working to validate the approaches of all potential winners over the past two weeks, which is why final results have not yet been posted.
kjd02002 (Moderator)
Sorry for the delay in posting the actual crack lengths for T7 and T8. These will be posted on a results page soon, but I am posting them here in the interim:
T7
Cycle | Crack length (mm)
36001 | 0
40167 | 0
44054 | 2.07
47022 | 3.14
49026 | 3.56
51030 | 4.13
53019 | 5.05
55031 | 7.22

T8
Cycle | Crack length (mm)
40000 | 0
50000 | 0
70000 | 0
74883 | 1.94
76931 | 2.5
89237 | 3.71
92315 | 3.88
96475 | 4.61
98492 | 4.96
100774 | 5.52

kjd02002 (Moderator)
Preliminary winners have achieved one of the top three penalty scores. Runners-up (those just outside the top three penalty scores) have been contacted in the event that preliminary winners are unable to fulfill the competition prize requirements, or are deemed to have submitted results that are not based on sound modeling methodology or are otherwise inconsistent with fair competition. Results for preliminary winners and runners-up will be reviewed in detail before final winners are announced on 18 August, 2019.
kjd02002 (Moderator)
Preliminary winners and runners-up have been contacted.
As a reminder, all teams are invited to submit extended summaries based on their work for the 2019 PHM Data Challenge for consideration by the poster session committee: posters@phmconference.org
kjd02002 (Moderator)
Dear 20Years,
We will be sharing measured crack lengths for the validation data within the next few days.
Final team results (scores and rank) will be posted on 18 August, 2019. This is to allow time for the results and preliminary winner review process.
Please note: preliminary winners will be contacted soon.
kjd02002 (Moderator)
Dear UTJS-1,
The penalty score is fully defined in the “penalty score” tab of https://www.phmdata.org/2019datachallenge/
The T6 training data includes the actual crack lengths, so you may use the penalty score definition to calculate all scores. The ScoringSpreadsheet was provided as a helpful guide but is not required to calculate your own score. Please note that the ScoringSpreadsheet available for download contains an error in the monotonicity penalty function: the spreadsheet incorrectly uses m = 100 instead of m = 10. Please reference: https://www.phmdata.org/forums/topic/factor-m-in-monotonic-penalty-function/
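For intuition only, here is a minimal sketch of how a monotonicity penalty with factor m = 10 could be applied to a crack-length prediction series. This is a hypothetical form written for illustration; the official formula is the one on the "penalty score" tab linked above, and the function name and penalty shape below are assumptions, not the challenge's definition.

```python
def monotonicity_penalty(predictions, m=10):
    """Illustrative (not official) monotonicity penalty.

    Physically, crack length should never decrease over cycles, so this
    sketch penalizes each drop between consecutive predictions by a
    factor m. Note the correction discussed above: m = 10, not m = 100.
    """
    penalty = 0.0
    for prev, curr in zip(predictions, predictions[1:]):
        if curr < prev:  # non-monotonic step: predicted crack shrank
            penalty += m * (prev - curr)
    return penalty

# Example: a prediction series (mm) that dips once by 0.5 mm
print(monotonicity_penalty([2.0, 2.5, 2.0, 3.0]))  # 10 * 0.5 = 5.0
```

A fully monotonic series incurs zero penalty under this sketch, which is the behavior the corrected spreadsheet should reproduce once m = 10 is used.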
kjd02002 (Moderator)
That is correct.
kjd02002 (Moderator)
The data challenge committee has identified the source of this problem. The original T6 training data is erroneous: it happens to include three sets of signals from the T8 validation data (cycle numbers 40000, 50000, and 70000). Additionally, the T6 description file is not representative of either the T6 or the T8 crack length data. Note: the T8 validation data is correct.
Our corrective action plan is as follows:
1) The correct T6 training data will be posted within the next 24 hours.
2) The result submission period will be extended from 21 July to 31 July, 2019 (11:59:59 pm PDT). This will allow teams an opportunity to update their models (if desired) based on the corrected T6 training data.
3) All teams will have access to the full T7 and T8 validation data (excluding crack length data). Note: many teams have already accessed the validation data.
4) All existing team result submissions will be cleared, and teams will have the opportunity to submit new predictions via an updated results submission website. The updated results submission website will be available no later than 19 July 2019 (11:59:59 pm PDT).
Please note, other relevant dates will be updated as follows:
– Preliminary Winners Announced = 4 August, 2019
– Winners Announced = 18 August, 2019

kjd02002 (Moderator)
Please reference the following: https://www.phmdata.org/forums/topic/strange-pattern-in-activation-signal/
I believe the truncated waveform is the feature you are highlighting, which is the same as that described in the post above.
kjd02002 (Moderator)
Dear SLUNG,
Please reference my email from competition@phmconference.org (please check your junk mail to ensure receipt of this message).