CppCon 2018: “C++ Dependency Management: from Package Consumption to Project Development”

– Morning, welcome. So today I’d like to talk about
C++ dependency management. Here’s some statistics to get us started. So this is the number of packages in package repositories
of a couple of popular programming languages. As you can see when it comes to C++, before we can answer the question how many there are, we
first need to decide which package manager
we’re gonna talk about. As always in C++, we have
several alternatives. I was told that the largest
of them is apparently vcpkg, and they have about 800 packages. The second number in the Rust row is how many packages
they had two months ago. So just think about it. In the last two months, Rust added almost three times as many packages as the largest package
repository in C++ has in total. And if this doesn’t tell
you that we have a problem, then I don’t know what else will. So what exactly is our problem? Well, we have many, right? We don’t have a standard build system, but that’s a topic for another talk on Wednesday. So we actually have a
couple of package managers now in C++. But the problem is that they are what I would call
consumption managers, right? They can download, build,
and install your package and its dependencies, but
they don’t concern themselves with how those packages are actually developed. And if you look inside at how the sausage is actually made, then the answer is usually: painfully, and rarely by mere mortals. And that’s, I think, the fundamental problem that prevents us from getting
to thousands of packages in our repositories. We need to handle the
entire project development life cycle, starting from creation, development, testing, dependency
management, of course, as the central part, as well as delivery: publishing packages to places where they can be easily accessed by the users. In other words, we need to turn every C++ developer into a potential library author. And there’s no fundamental reason this should suck in C++, right? There’s no reason we cannot do it. So today I would like to show you how we do it in build2, which is a build toolchain for C++. If you look at a language like Rust, they have a single tool. They have a model that’s called
a build systemless model, where you just interact
with this package manager slash build system. I don’t think this is going to scale to C++, and it doesn’t actually scale to more complex Rust projects either, as they’re finding out. So instead, build2 is actually a hierarchy of tools. At the bottom, we
have the build system. In the middle, we have
the package manager, which is the consumption manager. Sometimes you just need
to build a package. You don’t actually plan to develop it. And then on top, we have
the project dependency management tool, which
is what you’re gonna use to develop your projects. And this build2 toolchain is complemented nicely by your version control system. So together, they will constitute the core of your development tool set. Okay, so gonna see how it works in build2. We need an example. Who wants to write another
hello world example? Not very enthusiastic. How bout we write something real? How bout we actually create a real library that most of us might actually want to use at the end of this talk? How bout a portable dependency-free UUID generation library? Currently the only viable
option is Boost.UUID, which is a good option if you already have a dependency on Boost. But to bring in that dependency just to generate UUIDs sounds a bit like overkill. So that’s what we’re gonna try to do. But before we actually dive in, let’s step back a bit and talk briefly about how exactly a library gets developed. What actually makes
someone create a library? Well, one possibility is
I woke up this morning, I realized, hey, I’m gonna create this great new library that’s
gonna become popular, and I’ll become famous and give CppCon talks. I think most of us by now have developed a resistance to such marketing ploys. So instead, we would much prefer to use a library that was born out of real need. You know, someone had a
piece of functionality in their project, and
they wanted to use it in another project, so they quickly factored it into a library. Or I published my application,
my program on GitHub, and someone saw it and said, hey, this thing it does, nifty. I also want to do it in my application. Can you put it in a library? So and these are some of the properties or some of the guidelines I use when I work on such organic libraries. They are normally, as I said, born out of a real need. They’re usually small libraries
doing a specific thing, hopefully well. I also try to resist
overengineering things when I move a piece of
code into a library, ’cause now you’re starting to base it on some theoretical thoughts, and I also like to have a test, at least one in my library. All right, so in the
spirit of this organic library development,
we’re actually gonna start with a program and an executable, and then we’ll see how we can, how easily we can factor this
UUID generation functionality into a library. So, the easiest way (everyone can see, right? We’re all good?), the easiest way to create a project in build2 is with the bdep new command. Here we specify that the project type is executable, we choose C++, and we call it genuuid.
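For reference, a sketch of that command; the flags follow the bdep documentation, and the output line is approximate:

    $ bdep new -t exe -l c++ genuuid
    created new executable project genuuid in /tmp/genuuid/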
So this is actually going to be a fairly useful utility on its own, because there are usually similar utilities available on different platforms, but sometimes they’re not installed by default, or the output format is different. Okay, so we’ve created it… (t is my alias for tree.) If we look inside, what we get is a hello world project which we can customize for our needs. There’s also some build and
packaging infrastructure set up, and I’m not gonna spend half an hour explaining all of it right now. We’ll just, as we need bits and pieces, we will talk about them. Now, this is also a good time to talk about build systems briefly. If you look at Rust, for example, they have what I would call a
build systemless model, right? You put things in certain place, name it a certain way, and
then you’ll get an executable or a library. I don’t believe this is gonna
scale to C++ requirements, and we’ll see a good example
of that in a few minutes. But what we actually achieved in build2, kind of even by accident, is that for simple projects, you can approximate
this build systemless flow. If you put your source files
in this project subdirectory, in the source subdirectory,
you can add new source files, you can rename them, you can delete them, and the build system will
automatically pick it up. You actually don’t need
to touch the build file when you’re just adding
or removing source files. So I think this is kind
of a nice middle ground. For simple things, you
rarely need to touch the build system. But if you need it, it’s actually there. And we will see now quite
often you do need it. Okay, so there is our source file. We can already see it. But before we’re gonna go hack on it, let’s first set up a build infrastructure. So in build2, we build
normally out of source, in directories called build configurations. So we need to initialize our project in a couple of build configurations; normally in C++ we have at least several different compilers, options, and so on. We’re gonna use the init command. With init, you can initialize your project in an existing build configuration, but we haven’t created any, so we’re just gonna use its create mode. Here we give it a name, gcc, and we specify the C++ compiler to use. Just need to go inside first. So yeah, this is actually a good point to make: the workflow of bdep is made to match the development workflow that is dictated by your version control system. So normally you do things from the source directory of your project, and build2 doesn’t try to change the way you do it. Okay, so here’s one for GCC. Let’s also do one for Clang; I have Clang on my machine. And for interest’s sake, to keep things interesting, let’s also do a MinGW one. MinGW is a cross compiler: it’s GCC targeting Windows.
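Roughly the three invocations being described; the configuration directory names and the MinGW compiler name are my assumptions, not necessarily what was typed on screen:

    $ bdep init -C ../genuuid-gcc   @gcc   cc config.cxx=g++
    $ bdep init -C ../genuuid-clang @clang cc config.cxx=clang++
    $ bdep init -C ../genuuid-mingw @mingw cc config.cxx=x86_64-w64-mingw32-g++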
So now if we look at the parent directory, next to our project we have three appropriately named directories that contain our build configurations. Another way to check which build configurations we have is the bdep config list command. There are those three again, with their names listed. You might notice that the
first build configuration is a little bit different,
has a little bit different properties. So in build2, one build configuration can be designated as the default. And this has a couple of nice properties. If you run the build system
in the source directory, then it will automatically build in this default build configuration, so you don’t actually need
to specify the directory explicitly or change the directory. And also some interesting targets, some for example
executables, documentation, test results, they will
automatically be backlinked into your source directory. So you can, for example, easily run them. So let’s see how that all works. So there we run the build system, directory now source, and if we look at the listing, there’s our executable, which is actually a symbolic link to the executable in the default build configuration. We can then run it. As promised, it’s a hello world example. It’s also a built in for
other build configurations that we’ve set up. We can do it with the build system directly; we’ll just have to specify the long directory path explicitly. A more convenient way to do it is with the bdep update command. We can use the names that we gave our build configurations, for example clang, or we can say update all build configurations, which is quite a handy option. So there GCC was already up to date, and the other two were updated successfully.
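A sketch of the commands in question, per the bdep documentation:

    $ b                    # build in the default (gcc) configuration
    $ ./genuuid            # run the backlinked executable from the source directory
    $ bdep update @clang   # update a configuration by name
    $ bdep update -a       # or update all of them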
Okay, so there’s our build infrastructure set up. Now we are ready to start changing our hello example to actually generate UUIDs. Again, there’s our listing, and we see the source file. So without much thinking, I just go and start messing with it. So there’s the hello world example. The font is a bit small, right? We cannot see? Okay, so the hello example, as promised. I’m gonna replace it
with my UUID generator. As you can see, I type very fast today. I’m not gonna go into too much detail: there’s the uuid type that has a generate function and a string function, which is what I use here. And then at the bottom there are the gory details of how we actually do it on different platforms, and so on.
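As an illustration of the kind of platform-specific code hiding behind that small interface, here is roughly what the Linux path looks like using libuuid; this is my sketch, not the talk’s actual source:

    // Linux path (sketch): libuuid from util-linux, which is why a separate
    // library has to be linked, as we are about to find out.
    #include <uuid/uuid.h>   // uuid_generate(), uuid_unparse_lower()

    #include <string>

    static std::string generate_uuid_string ()
    {
      uuid_t id;                     // 16 raw bytes
      uuid_generate (id);

      char buf[37];                  // 36 characters plus the terminating NUL
      uuid_unparse_lower (id, buf);
      return std::string (buf);
    }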
Okay, so let’s go try to build it with our build system. It compiles successfully, but then the linking fails. It turns out that the system function that we use on Linux to generate UUIDs requires us to link a separate library. This is a good example of where the build systemless model only takes us so far, and we actually do need the build system. You need to link this library
only on this platform. So we have to go look at the buildfile. Okay, it’s short, a little bit cryptic; I’m not gonna go into detail on it today, there is nice documentation if you are interested. So I’m just gonna link the library. There we link it for Linux, and I’m also anticipating some issues on Windows, so I’ll fix that as well.
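Here is a sketch of the kind of buildfile addition being described; the target classes and library names are my assumptions rather than the talk’s literal buildfile:

    # buildfile (fragment): link the system UUID libraries where needed.
    if ($cxx.target.class == 'linux')
    {
      cxx.libs += -luuid     # uuid_generate() lives in libuuid
    }

    if ($cxx.target.class == 'windows')
    {
      cxx.libs += -lrpcrt4   # UuidCreate(); MSVC would want rpcrt4.lib instead
    }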
Okay, let’s try the build: that worked. It even generates us some UUIDs, and they’re even unique. Let’s try to run the tests. That didn’t work out very well. Well, if you look at the diagnostics, it kinda makes sense, right? It was originally a hello world example; now it generates UUIDs. So let me also fix that. There are two types of tests in build2: the simple test, where you make an executable and, if it runs successfully, the test is assumed to have passed; and scripted tests, where you can supply input and analyze output with regular
expressions and so on. And they also run in parallel quite nicely from the same file. So again, there’s a nice
documentation for this. I’m just gonna replace it. So there are two tests. First checks that the format is correct using a regular expression,
and the second one generates two UUIDs and makes sure that they are different. So just basic smoke testing,
but good enough for our needs. Let’s try the test. So the test succeeded,
just got a warning that the directory had some stuff in it from the previous failed build. To test the other build configurations, similar to update, there is the test command, and we can just test in all the different configurations. Even the Windows test succeeded; in case you’re wondering, that test runs under Wine emulation.
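For reference (same pattern as with update):

    $ b test         # run the tests in the default configuration
    $ bdep test -a   # run them in all build configurations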
Okay, I think our little tool is in good shape. Maybe we should commit it and push it to GitHub. Before we do that, though, let’s briefly talk about… (man talking quietly) You’re referring to the
test warning, right? Well, it does that. It just warns you that
there was some stuff there for safety reasons. Okay, so versions, right? So before we actually publish anything, let’s briefly talk about versioning. So versions are a
signaling mechanism, right? They tell our users what kind of changes they can expect to find in each release. I’m sure most of you are
familiar with versioning; in the development world, semantic versioning is pretty much the standard. Yes, it’s not perfect; yes, it has issues; but it seems to be the tool that does the job most of the time. As for the different ways to map it to C++, we have binary compatibility and we have source compatibility. The mapping that I prefer and recommend is to reserve patch versions for binary-compatible changes, mostly specific bug fixes, to use minor versions for source-compatible changes, and if you’re not sure what exactly the effect of your change is, to just be safe and increment major. Hopefully there will be tools that will actually suggest or check for us which component we should increment. So if anyone wants to work on such a tool, I would gladly use it. Okay, so it’s easy so far. I’m sure I
haven’t told anything new to most of you. So there are two releases, right? What happens in between? What if we want to publish
an alpha or beta release for our users to test? Well, it’s actually handled
by semantic versioning, so there can be a prerelease component. So there we have our first
alpha release, right? Okay, what happens in between? What if we want to publish
our version to a CI server or some of our users wants to test a specific commit? For example, we implemented
the feature for them and said, hey, test it for me. How do we communicate? We can specify commit ID, but
it’s not exactly the same. Now, if you don’t change a version for each commit, you might
have the same version that signifies multiple
states of your project. And that’s where it
actually can be painful. So in build2 what we have is
called continuous versioning. So in build2, every commit of your project is assigned a unique,
properly ordered version. So this is what it looks like. For releases and prereleases there’s nothing new here; those you manage yourself. But in between, for unreleased commits, build2 in cooperation with your version control system automatically assigns you a snapshot version. This is what it looks like for Git, for example: a commit timestamp plus an abbreviated commit ID.
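As a concrete illustration (these values are made up, not copied from the slide):

    1.2.3                                  # release: managed by you
    1.2.3-b.1                              # pre-release (beta 1): also managed by you
    1.2.4-a.0.20181007085806.da39a3ee5e6b  # snapshot: commit timestamp plus abbreviated commit id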
Let’s actually see how it works in reality versus our theory. Another handy command is status: if we run bdep status, it tells us the status of our project in a build configuration, in this case the default build configuration because we haven’t specified one. Now, this version doesn’t look exactly like what I showed on the slide, and the reason is that we don’t have any commits yet; if you don’t have any commits, you get this special UNIX epoch version. So let’s go ahead and make a commit and run status again. Okay, so the configured
version is still the same, but now we have a new available version. So this is the example
of where the toolchain noticed, by querying the version control system, Git in this case, that there is a new commit, and assigned the new version for us automatically. So this is important: I believe for this to work in practice, it has to be automatic. You cannot go and modify your version file every time you make a commit; nobody’s gonna do that. Okay, so if we run the build system now to update our project, the last line says that
everything is up to date, which makes sense. We haven’t really changed anything. The first two lines are actually what I want to draw your attention to. What we have here is an
automatic synchronization on build system invocation. So in build2, if metadata of a project, for example, its version or
set of dependencies changed, then every time you
invoke the build system, these changes are detected
and automatically synchronized with the build configuration
where you are building. So this might not sound very interesting; after all, nothing has changed here. But we’ll see in a moment where it does make quite a big difference. Okay, so we’ve made a commit. I’m gonna go and create a repository on GitHub
in my personal account. I’m sure most of you’ve done that before dozens of times. So there’s a new repository. I’m going to go and add the remote and push my changes. Okay, there they are, nothing new I hope for most of you. So now our little utility’s published. People can use it. There is, however, one issue. So we kinda called it portable, but we haven’t really tested it on that many platforms and
compiler combinations, right? We tested it on Linux, and we’ve also tried it with GCC, with
MinGW GCC for Windows, ran it on the Wine emulation. But there’s also Mac OS. There’s also Visual Studio in Windows. There are also things
like FreeBSD and so on. We could test this manually. We can go through some virtual machines. For Mac OS you would need real hardware. But that would be quite
painful and tedious, and especially it would be unfortunate if we don’t ourselves care
about these platforms. So if I don’t care
about Mac OS or FreeBSD, I’m probably not gonna
do it if it’s painful. So in build2, we actually have a better way: we run a public CI service for open-source projects, so we can test our project on all the major platforms and compilers without actually leaving our preferred development environment. Let’s see how that works. We use the bdep ci command. What it does is send the specific commit, the specific version of our project, for remote testing to the CI service. So let’s say yes here. In return, we get a link, a URL, where we can go and take a look at the build results.
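A sketch of the interaction; the prompts and URLs are paraphrased, not transcribed:

    $ bdep ci
    submitting: genuuid/0.1.0-a.0.<timestamp>.<commit>
            to: https://ci.cppget.org
    continue? [y/n] y
    CI request is queued: <URL of the build results page>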
Okay, so it’s already started building. Let me refresh the page a couple of times, and we’ll see more builds, hopefully. We can also go take a look at which build configurations are available. Currently we offer basically all the major mainstream platforms, Linux, Windows, FreeBSD, and Mac OS, and the latest two versions of the main compilers for each of them: Visual Studio, Clang, and GCC. We also have some
interesting combinations, for example a Clang with libc++ on Linux or Homebrew GCC on Mac OS. And there is a MinGW for Windows. So I think this gives you
a pretty good coverage for general purpose
platforms and compilers. Sorry? (man talking quietly) Not yet, maybe in the future. Okay, let’s go back to our build. So there are actually
12 build configurations in there. So you can see some of
them are still building, some of them have finished. So let’s take a look. Okay, there’s actually an error on Linux, which is a bit surprising; we kinda tested it there, it was our development machine. If we go look at the logs, what happens is that the header that we include actually comes from a separate package that is not installed by default. So the platform that we were probably the most sure about turned out to be in not such great shape. Oh well. Some surprises, right? Let’s take a look at what else is there. Well, the pleasant
surprise is that it works out nicely on Mac OS. Everything built successfully. What else? FreeBSD also failed. We go look inside, the same story, except here we don’t
actually have a package. Here they’re just not
available on FreeBSD. There’s a completely different API that you use for generating UIDs. Let’s also take a look at
Windows with Visual Studio. Okay, so that looks good. So there’s a Visual Studio
14 build successfully. If we wait a little bit longer, there’s a Visual Studio 15
also built successfully. All right, somewhat a
mixed bag, all right? The platform that we were most sure about turned out to be in pretty bad shape. But otherwise, some
other pleasant surprises. And I think that’s okay. I think if we adopt
this organic development mindset, it’s fine. I published my project, someone can come, and if they are interested, they can come and help and improve it. Well, speaking of help, let’s say someone saw our project on GitHub, and they want to join the project, help us develop it, maybe fix some issues. So one of the key goals of build2 is to provide a uniform interface and consistent behavior
across all the platforms and compilers. So let’s see how that actually
translates to reality. So here, let’s say our
collaborator is actually on Windows, right? They’re using Windows for development. Let’s see how difficult it actually is for them to get started. So this is the proverbial “git clone, now what?” question: I want to clone and get to hacking as quickly as possible. So there’s my clone. Again, we use bdep init to initialize and create a configuration all in one go; let me just find it in history, there it is. And then we can even try to run it. It generates, and even the test worked.
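Roughly the “clone and go” sequence on the collaborator’s machine; the repository URL, configuration name, and the choice of MSVC are my assumptions:

    > git clone https://github.com/<user>/genuuid.git
    > cd genuuid
    > bdep init -C ..\genuuid-msvc @msvc cc config.cxx=cl
    > b test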
All right, so that wasn’t too painful, I think. It probably took me 30 seconds to get going. Okay, so I think we’re not in bad shape. Some platforms don’t work, but hopefully someone will help fix that. So now let’s say we want to generate UUIDs in another project of ours, right? So now comes the part where
we take this functionality and put it into a library. And this, I think, is again a critical part of achieving those thousands of packages and making everyone a library developer. If this is a five-to-ten-minute process, fairly painless and frictionless, then I might create a couple of libraries a day; no issue with that. But if instead, as it is now, it’s a multi-day painful process of testing things in different places in different ways, then I might as well just duplicate the code. Yeah, there will be maintenance headaches, but that’s going to be later; it’s not going to be now, two days of pain. So let’s see how that all works out. Again, we use the bdep new command to create a new project, but now, instead of an executable, we say it’s a library: the type changes to lib, and I’m calling it libstud-uuid.
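Again a sketch of the command, with flags per the bdep documentation:

    $ bdep new -t lib -l c++ libstud-uuid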
So, I’m working on a family of small libraries called libstud. The name is a play on the pronunciation of namespace std, the joke being that most of these should have been in the standard library long ago. But let’s not go there. Okay, so there we ran it. If you look at our directory,
there’s our executable. Right there’s the three
build configurations. And there’s now our library. And if we look inside, again, it’s a hello world library that is ready for us to be customized to suit our needs, a little bit more infrastructure there. But otherwise, it’s pretty
similar to an executable. Before we start modifying it, let’s again set up our build infrastructure. This time we’re just gonna reuse our existing build configurations; there’s no reason for us to create new ones, so we’re gonna use the add mode of bdep init. So there’s GCC, there’s Clang, and there’s MinGW. Just to make sure everything is good, let’s build and test everything in all build configurations with bdep test. All good.
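A sketch, reusing the configuration directories assumed earlier for genuuid:

    $ cd libstud-uuid
    $ bdep init -A ../genuuid-gcc   @gcc
    $ bdep init -A ../genuuid-clang @clang
    $ bdep init -A ../genuuid-mingw @mingw
    $ bdep test -a   # build and test everywhere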
Okay, so now is the point where we take the functionality from our executable and put it into a library. I’m not gonna do it live; I’m just gonna copy the code over. It turns out, somewhat surprisingly to me, that generating a UUID portably on different platforms is actually more difficult than one might think. In fact, on many platforms you might end up with a non-unique UUID, so we need to handle that situation. So let me take a look. There are now our source files; you can see there are some platform-specific ones, and this Linux one hopefully, maybe miraculously, fixed our Linux issue. We’ll just build and test it locally again. Okay, all works. I’m going to commit it now. I’ve already created a
repository on GitHub, so I’m just gonna add the remote quickly, push it, and CI it again to see what’s going on. Again, it’s sent for remote testing and we get a link back. Let’s open it in a browser; it’s now building. Okay, so the builds are off. While we are waiting (it will take a few minutes), let’s go and start adjusting our genuuid utility to actually use the library. Right now it just duplicates the functionality, and it doesn’t make sense to have it in two places, so we’re gonna add a dependency on the library instead of having it there. So we’re gonna go over there;
there’s our listing again. So first, let’s start with the source file. This one I’m gonna do live: bring in the library’s namespace, stud, and remove the gory details. So now our little utility is nice and clean.
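A sketch of what the slimmed-down source might look like; the header path and the exact API spellings (uuid::generate(), string()) are assumptions based on how the library is described in the talk:

    // genuuid.cxx (sketch): the gory platform details are gone; we just
    // call into libstud-uuid. Header path and API names are assumed.
    #include <iostream>

    #include <libstud/uuid.hxx>

    int main ()
    {
      using namespace stud;

      std::cout << uuid::generate ().string () << std::endl;
    }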
All right, so the header is included and we call its functions; now we need to link the library somewhere. So, again, to the buildfile. The generated buildfile actually includes some commented-out infrastructure for importing libraries from other projects and linking them to our executable, so we’re just gonna use that. What’s important is the assignment to this libs variable and that it’s listed as a prerequisite of our executable. And again, the nice thing is that we can get rid of the ugly explicit library linking, so now our buildfile is nice and clean and simple again.
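The uncommented import might look something like this; the lib{stud-uuid} target name and the exact prerequisite pattern are my assumptions about the generated buildfile:

    # buildfile (fragment): import the library and list it, via the libs
    # variable, as a prerequisite of the executable.
    import libs += libstud-uuid%lib{stud-uuid}

    exe{genuuid}: {hxx cxx}{**} $libs testscript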
Okay, so: header included, library linked. Question? (person talking quietly) Include? (person talking quietly) Yeah, this is for the executable; the library’s header is included in the executable. Again, the rationale is documented and explained in the documentation. Okay, so header included, library linked: where will our library come from? Well, the natural place, from a package. So we’re gonna go and list
the dependency on the package. The place we do that is the manifest file. The manifest describes your build system project as a package, so it needs a name, a version, and some other meta information, a license for example, which we can fix while we’re at it. There is, again, a commented-out suggestion of how we might want to do that, so we’re just gonna follow along. We’re gonna leave out the version constraint for now; we will get back to that later.
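A sketch of the resulting manifest; the field values are illustrative, with the snapshot placeholder version and the still-unconstrained dependency as described above:

    # manifest (sketch)
    : 1
    name: genuuid
    version: 0.1.0-a.0.z
    summary: UUID generation utility
    license: MIT
    depends: libstud-uuid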
Okay, so: header included, library linked, package dependency set up. Where do packages come from? Package repositories, right? Those we specify in the repositories.manifest file. Again, there are a couple of commented-out suggestions, and one of them looks like a Git repository, so we’re just gonna play along: we’ll put the GitHub repository of our libstud-uuid there and use what’s available from master for now.
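Something along these lines; the GitHub account is a placeholder and the exact layout is from my reading of the manifest format, not the screen:

    # repositories.manifest (sketch)
    : 1
    role: prerequisite
    location: https://github.com/<user>/libstud-uuid.git#master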
Great. So to some of you, and even to me, having to specify this in four different places might sound bizarre. But there are actually good reasons for that. They’re quite technical, but in a nutshell it’s flexibility and not having to repeat yourself in multiple places. It’s all explained quite well in the introduction, at least I think so. As is a recurring theme in C++, we have special needs. Okay, so there we’ve converted our project to use the library. Let’s go take a look at our build results. So there are all 12 build
configurations built. And they’re all green. So even the Linux issue
miraculously disappeared somehow. The fix is actually quite interesting. So if you are curious, you are welcome to go take a look or
find me after the talk, and I’ll tell you how it’s done. Okay, so our library’s
actually in quite a nice shape, which means that our utility will also be nice and portable, as promised. Let’s try to build it. So there’s quite a lot of output: again, auto synchronization kicks in. We’ve added a new dependency, the meta information about the project changed, so it’s automatically synchronized with the build configuration that we are building in. There you can see the package information being fetched, things getting upgraded, and so on. Let’s also run tests in all
the other build configurations. Now that we have a dependency, we can run the tests of our own utility, but it might actually make sense to run the tests of our dependencies as well, seeing that we are using them. We call it deep testing, and all we have to do is add the recursive option. So there you can see all the tests for the library as well as for our utility; also quite a handy thing: when things are nicely integrated, you can run the tests of all the dependencies.
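The option spelling below is my reading of the bdep documentation for deep testing:

    $ bdep test -a --recursive   # test the project and, recursively, its dependencies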
Okay, let’s commit and push. We’re also gonna start the CI process, just to make sure everything is in good shape. Notice that I don’t need to worry about whether the CI process will find my dependencies and so on; it’s all taken care of. So there the build started. Again, while we wait, let’s go see how our Windows collaborator is doing. Now, this is a good example of where auto
synchronization is important. It would be really painful if I had to keep track of what changes to the metadata of the project happened on each pull and do something manually. In build2, all we have to do is pull and build; we don’t have to worry about it, auto synchronization kicks in. So in this case, the library actually gets downloaded and built; well, not installed, but made available to be used by our executable. So again, this process was rather painless for our friend here. Okay, let’s take a look at our CI. If we wait long enough,
they will all be green. There, FreeBSD is happy now. Linux, as well. So now we actually have a
fairly portable UUID generator in our library. Okay. So we published the library to our GitHub. How will potential users find it? Well, again, you can go to GitHub and search for UUID library or UUID generator. You’ll get a couple hundred results, most of them not even C++. And I think this is a good example: it would have missed the most promising option before we published this library, which is Boost, right? If you go to GitHub, you won’t find Boost there. And I think the experience of other languages that are ahead of us in this field, Rust for example, shows that it’s really useful to
have a central repository of packages, an archived
repository of packages. In fact, it’s quite useful to have both. They in a sense complement each other. So it’s nice to have version
control-based repositories and archive-based repositories
for your package manager. So version control-based
repositories are great for development, right, the kind you’re able to set up in 30 seconds on GitHub. Also, your project repository, at least in the case of build2, is normally your package repository as well; you don’t actually have to put it anywhere else. And this is exactly how we’ve done it with our utility and library, right? Remember when I put that GitHub URL in the repositories.manifest file, I used the libstud-uuid Git repository as its package repository. Very convenient. There are, however, problems with version
control-based repositories. As mentioned, the packages
are hard to discover. There’s GitHub, GitLab,
all other different hosting places, so you have
to go look at all of them. They’re also not very reliable. The users might decide, I’m
gonna delete this repository, and your program depends on it. Also, hosting companies
can go out of business, can get acquired. Version control repositories are also not as secure as archive-based
repositories can be. In the case of build2, our
repository’s authenticated, and packages in the repository are signed. So it’s pretty difficult to tamper with. We’ll see an example in a little bit. Finally, version control
repositories are pretty slow for creating the list
of available packages. You in most cases have
to clone actual content. And this especially becomes painful when you have, for
example, 100 dependencies, each living in its own package repository. And you can spend 15 minutes just creating the package information. Okay, so as you can see, the two
types of repositories actually complement
each other quite nicely. And our recommendation is to use your version control
repository for unreleased packages, for example, you implement
a feature for someone, and you say, hey, there is
a tag or there is a branch. Go check it out. And they can just use it
as a package repository. But at the same time, you would want to publish released versions of your project to an archive-based repository. For build2, we run a central repository at cppget.org. You are free to run your own
archive-based repositories, like for example if you
wanted to have one internal in your company, or even a public one alternative to this one. You’re welcome to do that. So let’s just go take a brief look what it actually looks like. So if you go to the front page, we get a list of packages, and we can use a search box, for example,
to search for a hello library. There we get a couple of results and immediately some information that is useful for making a selection of which one we prefer to use, for example the license or the number of dependencies. And once you narrow it down to a package, you can go and see which
versions are available and then different sections. Like some, for example, can be in stable, some in testing. So you can decide which ones you want. And if you go to a specific version, then besides even more
information about it, you see, for example,
all the build results for this version. So you can actually check, if you need Windows and this package fails to build on Windows, it’s probably not a good idea to use. Okay, so there’s cppget. So as an example, let’s try, let’s make a release of our library and publish it to cppget.org. Again, ideally we would want this process to be as frictionless
and painless as possible. So the only place where we need to change the version is the manifest; it propagates from there everywhere else, because, as I mentioned, released versions you manage yourself, so there’s nothing else to mess around with. Okay, so we’ve just changed the version, and now we commit our changes. We’re also gonna tag it, using the form adopted by quite a few projects, the notation v followed by the version. Once this tag is pushed, the users of our package can use our GitHub repository and, instead of master as I’ve shown in the repository URL, they can specify the tag if they prefer this specific version. And we push it.
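The git side of the release, sketched; the version and messages are illustrative:

    $ git commit -a -m "Release version 1.0.0"
    $ git tag -a v1.0.0 -m "Release version 1.0.0"
    $ git push --follow-tags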
Okay, so now we’ve made a release, published it, and tagged it in our GitHub repository. Now we want to
publish it to cppget.org. This is automated with the bdep publish command. In a sense, it’s similar to CI in its workflow, except that it submits our package for inclusion into an archive-based repository instead of for testing. We provide some information, you know, who is submitting it and so on, and we say yes. There’s some more output, and we see that things got uploaded: it creates an archive of our package, uploads it to the submission service, tells us that the package submission is queued, and gives us a link where we can go take a look.
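A sketch of the interaction; the prompts and messages are paraphrased from memory of the tool, not transcribed:

    $ bdep publish
    publishing: libstud-uuid/1.0.0
            to: https://cppget.org
            as: <your name> <your@email>
    continue? [y/n] y
    submitting libstud-uuid-1.0.0.tar.gz
    package submission is queued: <link to the queue section of cppget.org>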
Before we do that, let’s briefly discuss what happens to a package once we submit it to an archive-based repository. In the case of cppget, we’re trying to strike a balance between, on one hand, a large repository of low-quality packages and, on the other, a repository of a few high-quality packages but with a submission process so annoying and painful that nobody bothers to submit anything. So what we came up with
is our rule of one. So we don’t review our packages. We don’t decide whether
it’s worthy to be included. So provided your package
satisfies these simple rules, it’s gonna get included into cppget.org. So once you submit the package, it gets into a queue. So it’s a queue section of the repository where it’s tested, similar to CI. And provided that it builds
on at least one platform and compiler, it’s gonna get moved to testing. The rationale here is that if it builds on at least one platform and compiler, then it can be useful to someone, right? If it doesn’t build
anywhere, probably not. So once in testing, it’s
gonna be tested on a bit more compilers and platforms. So we also build them with older versions of the compiler, so if
you want to support those, you can see what the status is there. Also, the users of your project have a chance to test things against their own projects, for example if they depend on your library. Provided there are no issues there and your project has at least one test, it will be moved to stable. So again, here we require you to have at least one test, and I think that’s a reasonable expectation, because think about it: if you build a library and you don’t have any tests, you don’t actually know whether your library even links, let alone runs. For example, for me
personally, it’s very common to forget to export
symbols from a library. So if you don’t even try to link it, you cannot really say anything about it. It’s also possible that a package is no longer maintained, package version is no longer maintained
and it fails to build on all the platforms and compilers that people can reasonably care about. So once a package is in stable, we actually never remove
it from the repository. So that’s the reliability guarantee that I mentioned earlier. But we might move it to the legacy section, where we stop building it, right? So if it’s no longer maintained, then new projects probably
shouldn’t depend on it. Okay, so with that understanding, this message probably makes more sense now. We get a link to the queue section, where we can see our package. There it is, already built, as we can see, and all green. So it will sit there for some time, then it will be moved
to the testing section, sit there for a little bit longer, and then will end up in stable. So I actually went through this process a couple of days ago, ’cause
we don’t have time to wait. And I published the library to cppget.org. So it’s actually already available in the stable section of the repository. There it is. So just as an example, let’s go and add a dependency, add this new repository to our executable. And we’ll also add a version constraint to our dependency. So again, we will
uncomment the suggestion. As you can see, the commented-out entry already contains the cppget.org stable section, so if you want to use packages from cppget.org, you just need to uncomment it. There’s also the trust field. This is that repository authentication I mentioned earlier: if we don’t specify the fingerprint there, then we will be asked to manually authenticate the repository every time we build things, which would be annoying. So in a sense, we’re establishing a chain of trust here. As you can see, I still kept the Git repository as a source of this package, in case I want to test
some unreleased version or some such. Let’s also add the dependency constraint now that we have a stable version released. Just briefly about dependency constraints: those of you who have used other package managers are probably familiar with caret and tilde constraints. The caret constraint allows any later version as long as the major version stays the same, while the tilde constraint is more conservative and only allows later patch versions. The caret constraint is a good default; there are also other operators and ranges. So we’re just gonna use that: we specify that we are happy with any later version as long as the major version stays the same, which means it stays source-compatible.
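Putting it together, the relevant entries might look like this; the certificate fingerprint is elided (the real one is published on cppget.org) and the GitHub account remains a placeholder:

    # repositories.manifest (sketch)
    : 1
    role: prerequisite
    location: https://pkg.cppget.org/1/stable
    trust: <repository certificate fingerprint>
    :
    role: prerequisite
    location: https://github.com/<user>/libstud-uuid.git#master

    # manifest (fragment)
    depends: libstud-uuid ^1.0.0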
Okay, I’m gonna run the tests again; all the synchronization machinery kicks in, things get pulled, and all looks good. Let me also commit this change. Now I want to go see how this will all play out on Windows, so we’ve gotta come over there. Again, now that we’ve placed a version constraint, the version that we built on Windows earlier no longer satisfies this constraint, right? So it would be unfortunate if we had to do anything manually there. Luckily we don’t: things automatically synchronize, and our old version, as you can see, is automatically upgraded to the latest. Again, a pretty painless experience. (person talking quietly) No, source; it’s all source, no binaries. This is a source-based package manager. Okay, and that’s pretty much it. Just to summarize quickly what we’ve done: we started from an executable. We then factored the UUID
generation out into a library, fixed the problems, and now it’s nice and portable. We then made a release of the library and published it to an archive-based repository. All right, and just to summarize the key points about build2, I think the overall
thing is that development in C++ doesn’t have to be painful. There is no reason why we cannot do as good or even better as Rust. For example, some of the things
like continuous versioning or the CI service is something that you actually don’t find in Rust, and I think it’s an
improvement over their model. So just to summarize, it’s an integrated build toolchain for
C++: a package manager, a project dependency manager, and a build system whose designs inform each other. It covers the entire project
development life cycle. It has a uniform interface
across platforms and compilers, and supports both archive-based and version control-based repositories, which is really handy since they complement each other nicely. Finally, it’s dependency-free: all you need is a C++ compiler. And this brings me to our offer to the C++ community: start new projects in, or convert your existing
projects to build2, and you can CI them for free
on all the major platforms and compilers. And so this is what’s
currently available to you, and we are also planning,
so we will keep upgrading, you know, Microsoft releases new version, Clang releases new version, you don’t actually need to do anything. We will keep upgrading these machines and make them available to you. Also planning to add some static analyzers and sanitizer builds, as well. We actually have them
locally over in testing but haven’t pushed them to CI. And additionally, if you, so
you can use the CI service even if you don’t publish
anything to cppget.org. If you do, then you’ll get
some additional testing. We also test there for older compilers if you want to support that, and so on. Okay, thank you very much. (applause) Okay, we have a few minutes left, but I’m happy to stay. So if some people have questions. (person talking quietly) Okay, so the question, was how do I build debug or release versions? Is there some command line argument? So you’ve seen how I’ve created the build configurations
for Clang, GCC, and MinGW. So you’ll do it exactly the same. You can create two build
configurations for GCC, one debug, one release. (person talking quietly) So the question is, is there an easy way to consume libraries
that don’t use build2, such as autoconf-based or CMake-based ones? Yeah, the answer is yes, especially if your library produces a pkg-config file, as most autoconf libraries do; this all works automatically. So I guess to put it another way, the way to interface with libraries that use other build
systems is by installation. So you install them, not necessarily into your system location, but somewhere, and then even if they don’t
provide pkg-config file, it will still be usable. But if they provide pkg-config, then build2 will automatically use that. (person talking quietly) Okay, so the question is, is it possible to upload, submit a library
that does not use build2 to the repository? And the answer is no, and
the reason for that is it’s just not gonna work. We won’t be able to build it, or, yeah. (person talking quietly) Let’s maybe come around, okay? Can we, yep? (person talking quietly) Okay, so the question is who can, let me see if I got that, who can dictate whether the library is
built static or shared or maybe both? And it’s actually both. So you can, it’s not very common, but you can for example say, my library will only build
static or only shared. But also, consumers can specify a preference for which one they want. So it goes both ways, in a sense, okay? (person talking quietly) Sorry? (person talking quietly) Rpaths, okay, good question. So the question is, do we handle rpaths? And the answer is yes, and we actually go a step further and emulate rpath on Windows. So you can actually run
the executables in place. You don’t need to stash
them in the bin directory with your DLLs and end up with
conflicting names and so on. So yeah, we do. (person talking quietly) Okay, so I think we are out of time, but yeah, I’m welcome to
stay and answer questions. Thank you very much. (applause)

3 comments on “CppCon 2018: “C++ Dependency Management: from Package Consumption to Project Development”

  1. The bad thing about build systems is that if they don't have good IDE integration they are of very limited use. And the only build system that is providing this is "cmake". Unfortunately cmake has become a feature dumpster with a terrible language and documentation. I have a private project specific meta build system that for this reason build buildsystems (cmake, msbuild, packaging buildsystems for snap/flatpacks) that build buildsystems (makefile,ninja). Anyone see the problem we have here?

  2. I am very open to adopting a new build chain, nothing we use is good anyway. The chain needs to be designed with consideration to how an IDE interacts with it.

  3. Came here from Reddit to read the author’s name in the description and find out whether he’s Russian. I guessed right; that accent, omg.
