2023.05 VMA
Date: 2023-05-09
Time: 14:00 CEST
Moderator: Leandro
Venue: https://meet.jit.si/RIOT-VMA-2023-05
VMA forum entry: https://forum.riot-os.org/t/virtual-maintainer-assembly-vma-2023-05/3923
Previous VMA pad: https://hackmd.io/y9l8MOKRT7y_a6CqlO_q3A
Previous VMA forum entry: https://forum.riot-os.org/t/virtual-maintainer-assembly-vma-2023-02/3859
Attendees
Leandro
Martine
Emmanuel
Marian
Kevin
Jose
Teufelchen
Benpicco
Dylan
Kaspar
Koen
Note taking: Leandro
Agenda
VMA 2023.08 Moderation - 2min (Leandro)
2023.10 Release Manager - 4min (Leandro)
Outlook on upcoming 2023.07 Release - 5min (Ben)?
Security mailing list and advisories (Martine, 30min?)
Summit 2023: current state of organization and proposals for changes in the agenda/format (Oleg, 10min)
2023.04 Release debrief (Kevin & Jose, 30 mins)
RTT issue on samd / stm32, what to do (Kaspar, 15min?)
Notes
VMA 2023.08 Moderation
Martine offers to do it :)
2023.10 Release Manager
Leandro: volunteers?
Martine: need to update the tool (Ben: https://xkcd.com/1987/)
And the tool says… Kevin or Kaspar
Kevin: I had so much fun with this one. If nothing comes up I can do it again
Martine: Bennet can help
Kevin: Sure!
Outlook on upcoming 2023.07 Release
Ben: No dates so far, will come up with something and post. It will be a nice release :)
Security mailing list and advisories
Martine: Several problems with the security workflow
Only Martine and Chrysn read the mailing list
When the list was migrated there was a problem reading the emails
Lack of time for so few people
Security advisories brought some confusion
Martine: do we want to keep the security advisories? How do we continue? How do we merge fixes?
Maribu: In favor of least effort
Martine: It is easier to use GitHub forks, but that bypasses CI
Maribu: regular PR easier?
Martine: Yes!
Ben: if we fix something that was not reported, we do it normally in a PR. The few bugs that are reported should not get special treatment
Jose: we had bugs that were not reported.
Kaspar: +1 for regular PR
Maribu: benefits of other workflows? Do we lose anything?
Martine: the 'theater' was not a big deal; the bugs were not critical (unimportant or complex to exploit). But it could happen that more critical bugs get reported like that. Maybe discussing everything in the open gives more exposure to serious bugs.
Maribu: I don't think that doing all this overhead actually helps. Let's fix ASAP and people can update. When we get issue reports, people often run old versions of RIOT.
Jose: I think the same; we don't need to open PRs titled "Fix security…". There were issues that were fixed after being reported. Going through the whole overhead process would not make a huge difference.
Martine: Will provide PR to fix reports (normal PR)
Summit 2023: current state of organization and proposals for changes in the agenda/format
Oleg is not here. This can probably be discussed in the Forum.
2023.04 Release debrief
Kevin: automating the release, making it easier. Some topics are:
Should we move to a better commit system (i.e. conventional commits)?
Martine: what do you mean?
Jose: more info in the commits that one can extract later, e.g. major fix, feature, CI, etc. Like labels, but directly from the commits; a semantic commit approach.
Martine: adding this to the commit msg?
Jose: yes, there are standards. We may benefit from existing tools for this. We could extract deprecation dates and breaking changes of APIs; hard to do now. Sometimes we rely on maintainers knowing.
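For illustration, a minimal sketch of the extraction Jose describes, assuming conventional-commit-style subjects such as "fix(sys): ..." or "feat!: ..."; the release range used here is hypothetical:

```python
# Hypothetical sketch: bucket commits by conventional-commit prefix.
import re
import subprocess
from collections import defaultdict

# The release range is a placeholder; adjust to the actual tags.
subjects = subprocess.run(
    ["git", "log", "--format=%s", "2023.01..2023.04"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

pattern = re.compile(r"^(?P<type>\w+)(\([^)]*\))?(?P<breaking>!)?:\s*(?P<desc>.+)$")
buckets = defaultdict(list)
for subject in subjects:
    m = pattern.match(subject)
    if m is None:
        buckets["unclassified"].append(subject)  # legacy commits won't match
    elif m.group("breaking"):
        buckets["breaking"].append(m.group("desc"))
    else:
        buckets[m.group("type")].append(m.group("desc"))

for kind, entries in sorted(buckets.items()):
    print(f"{kind}: {len(entries)} commits")
```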
Kaspar: I tried conventional commits on some personal projects; they are a pain even for one person. Think of rewriting most commits, and many commits don't match any useful category
Ben: there's not always a clear cut between categories. We have a character limit on commit messages and this may add too much noise. Not so much value
Martine: same here, it is already hard to convince people to use our convention and this may make things harder and scare people.
Kevin: conventional commits are just one idea. We have a standard (file paths). We should become more independent from GitHub labels. It could also go in the message body. Conventional commits can track breaking changes. We have an issue with the labeling bot: sometimes many categories get set when changing something in PRs.
Ben: A/B issue? Isn't the issue that we have to generate the list of commits for the releases? Maybe the problem is having this list at all. Do we need it?
Jose: exactly, we should get more useful info to users; the list itself is not useful.
Kevin: a script that gets the info from the commits instead of PR titles. A flag in the commit to mark it for the notes? Does anyone think we should add more info to commits?
Martine: for the release notes we can have more verbose notes by adding all the text from the commits
Jose: we should add less. It is hard to summarize the key changes of a release. It is not clear what the breaking changes are and what is important.
Ben: ChatGPT?
Maribu: merge commits?
Kevin: contributors' commits
Martine: merge commits = PRs
Maribu: If it is merge commits, it is one less hoop to jump through for contributors, as we maintainers write that message.
Kevin: we want to abstract from GitHub. We can encode info in the merge commits, flag the breaking changes, and add this to the release notes.
Martine: can we do this with labels? We already have them; let's improve them. Why another system?
Jose: labels can be removed; we try to automate them, but labels are sometimes not exact
Martine: we have type labels. these need to be added by users
Ben: We can add more fine-grained labels
Martine: labels have the advantage of adding them at any time by maintainers
Jose: Can we enforce this? at least the main category, breaking change?
Martine: how?
Jose: you can't merge without a label
Martine: when labels are not set, someone needs to check
Kevin: check for at least one type of label
Martine: unify with colors and prefixes for same types
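A minimal sketch of the check Kevin suggests, assuming RIOT's "Type: ..." label prefix; GITHUB_REPOSITORY is the usual CI variable, PR_NUMBER is a hypothetical one set by the job:

```python
# Hypothetical CI step: fail when a PR carries no "Type: ..." label.
import json
import os
import sys
import urllib.request

repo = os.environ["GITHUB_REPOSITORY"]  # e.g. "RIOT-OS/RIOT"
pr = os.environ["PR_NUMBER"]            # hypothetical: set by the CI job

url = f"https://api.github.com/repos/{repo}/issues/{pr}/labels"
with urllib.request.urlopen(url) as resp:
    labels = [label["name"] for label in json.load(resp)]

if not any(name.startswith("Type:") for name in labels):
    sys.exit(f"PR #{pr} has no 'Type:' label, refusing to merge")
print(f"PR #{pr} labels OK: {labels}")
```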
Jose: then you miss the time when things actually happened
Martine: in documentation we have deprecations and their dates
Jose: we still have features that should have been deprecated but are still there
Martine: A new system would not fix this
Jose: make it part of the release script and add it to the release notes
Kevin: we can use labels and adapt them to our needs. Adapt the labeler to require a category label on PRs. Regarding deprecations: sometimes we forget to remove things; we need to be better at maintaining this. Maybe a script that checks for this?
Martine: this needs to be done after the release in which the feature is deprecated. Sometimes it's not so trivial to remove it (e.g. alternative functions). Maybe not so easy to automate.
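As a starting point for the script Kevin suggests, a sketch that greps headers for Doxygen @deprecated markers; the convention that a marker names a target release like "2023.07" is an assumption:

```python
# Hypothetical sketch: list @deprecated markers so the release manager
# can review what is due for removal after the release.
import re
from pathlib import Path

marker = re.compile(r"@deprecated\b(?P<rest>.*)")
release = re.compile(r"\b20\d{2}\.\d{2}\b")  # assumed "YYYY.MM" convention

for path in Path(".").rglob("*.h"):
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        m = marker.search(line)
        if m:
            due = release.search(m.group("rest"))
            tag = due.group(0) if due else "no target release given"
            print(f"{path}:{lineno}: {tag}")
```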
Parametrize boards in release spec tests, general cleanup
Kevin: lots of release specs specify boards. We want to parametrize the boards for the tests instead of saying which board should run which test. Anyone against?
Tumbleweed: Rolling by
Nope
Kevin: I’ll push this.
Presenting test artifacts
Kevin: we have the compile-and-test-for-board script. We found that boards are failing. We want to add the results of the script as an asset to the release; it shows the current state of all boards.
Ben: the test script is a bit too simple / brute force; it runs many unit tests that are completely hardware-independent.
Ben: actual board tests require some setup
Ben: they are unit tests and simple. Many times there are connection issues that get reported as test errors. For some tests we need HW connections, and running only the script does not provide enough value. We need to improve the tests and provide instructions on how to run tests that actually exercise hardware and boards (e.g. UART, GPIOs, …)
Kevin: sometimes we think that some things are HW-independent and affect no boards, but sometimes they still fail. Is running through the whole suite really so costly?
Ben: false positives/negatives are the cost. You can't tell if there are real issues hiding in the results due to false positives/negatives
Kevin: running with incremental flags should help with that
Martine: false positives??
Kevin: false negatives! :) We should change our tests to prevent false negatives. There's going to be a spec that says how to set up the board
Maribu: that’s make test-with-config
Kevin: I agree it's not always easy to find the bugs, but we should work towards fixing the flaky tests, and we should have more confidence in the boards and the tests that were run on those boards for the release cycle.
Maribu: when I run the script I find real bugs. But it often takes 1 hour per board, around 4 hours for a full cycle. If we optimized, we could get the same output in less time. Why do we have so many failing tests? I don't get them so often
Kevin: a lot of the false negatives come from the iotlab tests. On a local setup (8 boards) I had to run multiple times, and there were repeatable bugs. I used Docker, and a toolchain issue is still open. I think it's valuable. We should get it to the point where the time it takes is small compared to the value
Maribu: our scripts could be improved; sometimes the script just matches the output of the test program. It should check that the flashed app is the correct one (i.e. that flashing works). We need to update the tests when we update the applications, otherwise they are flaky
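A sketch of the flashing check Maribu describes, assuming pexpect and RIOT's make term target; the boot banner text and the app name are assumptions:

```python
# Hypothetical sketch: verify the flashed app is the expected one before
# trusting test output, so a silently failed flash doesn't look like a
# test failure.
import pexpect

EXPECTED_APP = "tests_periph_uart"  # hypothetical app identifier

term = pexpect.spawn("make term", timeout=10, encoding="utf-8")
term.expect("This is RIOT!")   # boot banner text is an assumption
term.expect(EXPECTED_APP)      # app is assumed to print its name on boot
print("correct application flashed, proceeding with the test")
```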
Kevin: worth doing? attaching the results of the tests to the release?
Ben: it makes sense to add selected tests to the release. Having 20 tests that actually test the hardware is more valuable than 200 tests of packages.
Kevin: we need a "hardware only" flag for running these tests with the script
Martine: the problem is that the script defines no configuration at all, not which configuration to use
Maribu: an Arduino shield that provides whatever the test requires, for a lot of tests (e.g. wiring)
Kevin: PHiLIP! It is meant for the peripheral tests
Maribu: it would be super cool! Even just a connection, e.g. between RX and TX for UART, would help. PHiLIP would be even better, because it would catch wrong UART communication parameters (e.g. a symbol rate not as configured) that a "loopback wire" wouldn't
Kevin: OK :D Attaching the results to the release?
no strong opinions
Moving the automated release spec tests into the main RIOT repo to encourage more testing
Jose: many of the release tests are integration tests that we can't run every day in the RIOT repo. Why keep them separate?
Martine: they are more release specific. The weekly run was an experiment. But nothing speaks against moving the tests and specs to RIOT as a subdir.
Jose: we can now run them on a weekly basis
Jose: can we move them to the RIOT repo?
Martine: yes (and we already do weeklies; also for test-on-iotlab and test-on-ryot)!
Martine: we can just move them like we did with the apps repo; just a subtree in the RIOT repo
Kevin: deprecation process
Leandro: let's separate removing flaky tests from the migration
Kevin: ok let’s move them
Martine: In the last 4 weeks, no failures at all!
=> we move them to RIOT
Better method of deprecation, more than reading through Doxygen (we could just grep for deprecation commits if we had conventional commits)
Jose: how do we treat deprecations? There are docs, but we always fail to deprecate and remove features in a timely manner
Martine: scroll through the list manually: https://doc.riot-os.org/deprecated.html
Jose: hard to track
Martine: do release managers feel responsible for deprecated features from other releases?
Leandro: the current release manager should be responsible
Kevin: I tried deprecating xtimer, but it was complicated and takes time. You can't ask the manager to do the removal, but they can push to get it done
Martine: deprecation is always after the release. When I do the release I clean up deprecations. It should be documented in the release manager wiki page
=> current release manager pushes to remove deprecated features and do cleanup of deprecated features
Kevin: the goal is continuous releases where no one has to do anything
Release notes generation could be improved, since things that should be tagged as, say, pkg get tagged as boards; this requires lots of manual checking
Kevin: it was an OK release!
RTT issue on samd / stm32, what to do
Maribu: My high-level understanding of the issue Kaspar wants to pick up: the HW sucks and we add sugar on top to make it work. For stm32 this will likely work; for samd the HW is broken beyond repair. If we don't use the RTT we don't have the time penalty, but we also don't get low power. For samd we can't have real time with low power. The best would be to fix the timer on samd with some magic. Writing the RTT locks the system for some cycles, not even interrupts run, which is unacceptably long. What to do? Pick the RTT as the default backend for all millisecond timers on all platforms? Mark platforms where the RTT does not work?
Leandro: remove feature?
Maribu: That would prevent low power, which works now (if one does not care about real time)
Ben: we can't fully automate the standby mode. In our case we use the RTT when we go to sleep for a long time. We would only use the standby mode when sleeping for long periods (seconds or minutes); then we also shut down external chips and put the radio to sleep. We don't gain much by entering standby mode for short periods of time, like xtimer does. Maybe it does not pay off
Maribu: Other optimization potential: often you don't want to meet a timer exactly. Batching sleeps could reduce power consumption. Sometimes our assumptions don't really hold.
Maribu (back at SAMD): Ideally we should also add a fix for the hardware.
Ben: you can, you just can't do it automatically; you need to do a manual check
Maribu: interrupts may happen when about to sleep
Ben: that’s a race condition
Maribu: it should be possible to fix this
Ben: they don't use the RTT for app timeouts, but rather to go to sleep
Maribu: still, something random can happen when about to sleep
Ben: you don’t care so much.
Kevin: policy flags: "I want low power", "I want real time". If you want everything, there is probably a manual config for that. Let's accept that for these chips
Maribu: now that we know this, we can discuss it and make an effort in this direction