On Versioning... and scrapli

Versioning, it turns out, is hard. Well… not really; putting a version number on a release is super easy, but having that version be meaningful to users is an entirely different story. I’m not going to beat a dead horse about the types of versioning or what the purported benefits of each type are; there are plenty of posts on the Internet about just that. Instead, I’m going to talk about my uh… “relationship”(?) with versioning, specifically as it pertains to scrapli.

From scrapli’s first release I have used calver versioning. I picked calver because, to me, it just makes sense – you should always pin requirements, and semver gives me no real insight into… well… anything about the version of a package. Obviously, you can (and should!) be checking release notes, and then semver starts to make a whole bunch more sense, but my initial thought was to just use calver so that it’s always clear at least how “new” a release of scrapli is. To me, that “newness” is an immediately obvious indicator of whether the project is being maintained – if the latest release is from 1983, well… maybe that project hasn’t had much love in a while and I should steer clear. Conversely, if a few releases have happened within the last few months or so, OK, cool, this project is getting updates and such.
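To make that concrete, here is a minimal sketch of what pinning looks like in a requirements.txt, using hypothetical version numbers and package names: the calver pin wears its release date on its sleeve, while the semver pin says nothing about when it shipped.

```
# requirements.txt (hypothetical pins, for illustration only)
scrapli==2021.1.30       # calver: the version alone tells you it was cut in January 2021
some-other-lib==4.2.1    # semver: the version alone tells you nothing about its age
```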

The obvious issue with calver, especially in the context of scrapli, is that there is no way, other than release notes/changelog, to indicate if/when breaking changes occur. Of course, this is why folks are proponents of semver… but… then there is the question of what counts as a breaking change. Will we forever be on major version X? X.1.1, X.1.2, X.1.3, X.2.1… etc. etc. and on and on…

In the end, I suspect that whatever versioning scheme is used, when there is a breaking change, some (many? most?) users will be caught off guard and be a little irked. So I ended up rolling with calver because it’s easier on the eyes, it makes sense to me, and I think people need to be pinning versions and doing some due diligence on their end anyway… and it’s my project, so I get to pick!

All that said, I still believe there are issues with calver, or, more accurately, with how I have been using calver. I’ve pretty much just cut releases whenever I think there is enough of an update to warrant a release… and that’s kind of a super semver way to do things, isn’t it! Or, put another way: my releases of scrapli have been meaningless… exactly the thing I wanted to avoid! Oh no!

To combat my own stupidity and the creation of meaningless releases… I have a plan! scrapli will now move to a semi-yearly (every 6 months) release cycle! Not just scrapli, but all the scrapli-related projects. Here is the rationale:

  1. All scrapli projects will now be in “sync” – this makes my life a lot easier, and it hopefully makes everyone’s life a lot easier too. There will almost certainly (like with the latest releases…) be times when a change in scrapli “core” needs to be dealt with in the other scrapli libraries – now at least it will be very easy to make sure things are in lockstep.

  2. Semi-yearly releases give the release cycle meaning, making it easier for folks to understand when they should be checking changelogs and such.

  3. It makes my life easier/saner – my plan is to basically just publish pre-release versions of the next release whenever I would historically have been cutting new releases… this way the new stuff is out there, testable, and pip installable for the other scrapli libraries (see the sketch after this list). Then, when the next release date arrives, the final version can be cut. If there are no changes, that’s OK, we’ll make a new release anyway so that all the scrapli libraries stay in lockstep.
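As a rough sketch of how that pre-release flow could look from the user side (the exact version strings here are hypothetical), pip skips pre-releases unless you explicitly opt in:

```
# opt in to the latest pre-release of scrapli with pip's --pre flag
pip install --pre scrapli

# or pin an exact (hypothetical) pre-release version while testing the upcoming cycle
pip install scrapli==2021.7.30a1
```

Anyone pinning a plain stable version continues to get only final releases, so the pre-releases stay opt-in.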

So… end of January and end of June are the target release times… I think (and hope!) that this will be a more mature way to handle releases, and one that will be nicer for users of scrapli. As always, pin your versions, check release notes, and subscribe to releases on GitHub so you are always in the loop on the projects you rely on!