support plan for members?

Hi folks, at Aptivate we’re happy users of the GitLab service provided by WebArchs here.

I had a few general questions (some have been answered kinda, but anyway):

  • Could we get faster CI builds? More parallel builds? I want it as fast as possible :slight_smile:
  • What are the other CE features we can get that we don’t already have?
  • Can we have GitLab Pages domains for building static sites?

But even without those, I’ve sent a number of support requests, and I know that @chris has sorted us out with a bigger repository limit due to a big migration we did. So I’m wondering whether, at some point, WebArchs will be looking to cover their costs and make this more sustainable with some kind of support plan for members?

So, yeah, bit of a brain dump but there it is!


1 Like

I’d like to add support for GitLab Pages for static sites at some point, when I can find the time. The only potential issue with this, last time I looked, was that we would need to sort out our own scripts for provisioning Let’s Encrypt certs, but this isn’t a big deal, and now that wildcard certs are possible that might be the way to go. Perhaps a separate domain could be used for the Pages sites, what do people think?
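For reference, enabling Pages on an Omnibus GitLab install is mostly a matter of setting the external URL in `/etc/gitlab/gitlab.rb`; the domain and cert paths below are placeholders, not a decision on what we’d actually use:

```ruby
# /etc/gitlab/gitlab.rb -- sketch only; "pages.example.coop" is a placeholder
pages_external_url "https://pages.example.coop"

# If a wildcard Let's Encrypt cert is provisioned by our own scripts:
pages_nginx['redirect_http_to_https'] = true
pages_nginx['ssl_certificate'] = "/etc/gitlab/ssl/pages.example.coop.crt"
pages_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/pages.example.coop.key"
```

Then `gitlab-ctl reconfigure` picks the settings up.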

I’m not sure what the other GitLab CE features we could add are, let me know if you find some?

We don’t currently have any plans for raising additional income to cover the costs of the service, do you have any suggestions?

Currently the server has access to 6 CPU cores and 16G of RAM; the processors are:

model name      : AMD Opteron(tm) Processor 6128
cpu MHz         : 1884.607
cache size      : 512 KB
bogomips        : 3769.63

We could rebuild it or move it to the newer front-facing server, which has low-energy CPUs (but probably faster RAM):

model name      : Intel(R) Xeon(R) CPU E5-2630L v4 @ 1.80GHz
cpu MHz         : 1795.876
cache size      : 25600 KB
bogomips        : 3591.70

And although the CPU might not be much different in speed, we could potentially (if there is spare space) run it off the RAID1 SSDs that are the system disks, rather than over the network off the file server, as this might be faster. We don’t need backups of this server since we can spin it up from scratch using Ansible. The only concern about doing this is that it would add additional wear and tear to the SSDs, and we have 45 virtual servers running on this machine and can’t afford for it to have any downtime.

The machine it is on does have 10k SAS disks set up with (I believe) software RAID5, and perhaps moving it to those might help some. I’ll discuss it with @kate and see what we can come up with, but it might be the case that we can’t really do much without faster hardware or by using external servers.

Following are some Munin graphs from today to give you an idea of what is going on:

[Munin daily graphs: disk IOPS, load, disk utilisation, disk throughput, swap, disk latency, interrupts, docker0 traffic, CPU, eth0 traffic, memory]

I think I saw that the GitHub auto-import needs to be enabled by you. It lets you authenticate through the usual OAuth flow rather than having to generate a separate API token. It would make migrating things easier if you can manage it.

Regarding CI build performance: I just want faster builds, and I assume that costs you in all sorts of ways (improvements, maintenance, investigation, etc.), which is why I’m asking whether you’re going to need to add support costs for the service.

As an example, 2 parallel builds which take a couple of minutes on my local machine take 55 minutes on the CI system, over at Pipeline · aptivate / ansible-roles / software-collections · GitLab. What does it do? It runs a pipenv install of Python packages (around 60 or so, though pipenv does concurrent installs) and then runs an Ansible test suite which uses Docker. It would be really great if we could chop these build times down.
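For concreteness, splitting the suite into parallel jobs in `.gitlab-ci.yml` could look something like this sketch; the job names and the `run_tests.sh` entry point are hypothetical stand-ins for whatever the repo actually uses:

```yaml
# .gitlab-ci.yml -- sketch; job names and test script are placeholders
stages:
  - test

.test_defaults: &test_defaults
  stage: test
  before_script:
    - pip install pipenv
    - pipenv install --dev

test_centos6:
  <<: *test_defaults
  script:
    - pipenv run ./run_tests.sh centos6

test_centos7:
  <<: *test_defaults
  script:
    - pipenv run ./run_tests.sh centos7
```

With enough runner capacity, both jobs run at the same time rather than back to back.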

Thanks for the thoughtful response with all the information!

I believe that these are the Integrate your server with GitHub instructions that need to be followed so I’ll try to get that done at some point this week.

We don’t have enough spare RAM on our faster front-facing server to run the GitLab server, so it’ll have to stay on the slower one for now, but the main issue appears to be iowait:


I don’t know how much that would help in any case…

Perhaps we need to look at getting a fairly powerful 1U dedicated server for the service. We could get something second hand, like this Dell PowerEdge R610 1U Server, 2 x Intel Xeon Quad Core X5570 @ 2.93GHz, 96GB RAM for £199.99. It would probably be terrible for electricity consumption, but most of the time it would be idle, so perhaps that would be OK. It would just need an SSD (or two for RAID1) and wouldn’t take long to set up. What do people think?

I have just come across this (after puzzling why repos with no CI configured were resulting in CI failures):

NOTE: Enabled by default: Starting with GitLab 11.3, the Auto DevOps pipeline will be enabled by default for all projects. If it’s not explicitly enabled for the project, Auto DevOps will be automatically disabled on the first pipeline failure.

Index · Autodevops · Topics · Help · GitLab

At some point we are going to need a Kubernetes cluster to host things on, as it is a requirement for Auto DevOps.

Also, before then, the Docker Executor should probably be switched to the Docker Machine Executor in order to use auto-scaling:

The Docker Machine is a special version of the Docker executor with support for auto-scaling. It works like the normal Docker executor but with build hosts created on demand by Docker Machine.
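A docker+machine runner is configured with a `[runners.machine]` section in `/etc/gitlab-runner/config.toml`; the driver and options in this sketch are placeholders (any Docker Machine driver would do), so treat it as an illustration rather than a working config:

```toml
# config.toml sketch -- driver and machine options are placeholders
[[runners]]
  name = "autoscale-runner"
  executor = "docker+machine"
  limit = 4                 # at most 4 machines / concurrent jobs
  [runners.docker]
    image = "debian:stretch"
  [runners.machine]
    IdleCount = 1           # keep one warm machine waiting for jobs
    IdleTime = 1800         # remove idle machines after 30 minutes
    MachineDriver = "generic"
    MachineName = "ci-%s"
```

The appeal is that build hosts only exist (and only cost electricity) while there are jobs to run.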

In the meantime the resources that the single Docker container can use have been increased to match the settings at the bottom of this README, perhaps this will make it run faster for now?

@lukewm I think this has been done, could you check it when you have a chance? I think you will need to log out first.

I have checked that the Sign in with GitHub can’t be used to create an account:

Signing in using your GitHub account without a pre-existing GitLab account is not allowed. Create a GitLab account first, and then connect it to your GitHub account.

But I haven’t tried anything else…

Listen, if you’re investing in this and putting new hardware in place (and I think it is great to invest, because this is becoming a really great resource for other co-ops) then Aptivate should really be involved in financially supporting it. Let’s talk about that.

Whatever you’re doing, I’ve got tremendously faster builds today. In my last report I was running a single job and it was taking 55 minutes. Now I’m running 4 parallel jobs and they’re all finishing in 9 minutes! Success over at:

Pipeline · aptivate / ansible-roles / mysql-server · GitLab.

However, the runner fell over quite soon after :slight_smile:

centos-6-mysql-51 (#4231) · Jobs · aptivate / ansible-roles / mysql-server · GitLab

great :smiley:

I believe I’ve fixed that by changing this value, let me know if you have further issues?

Awesome! I’ll be watching that repository now :slight_smile:

Builds working again:

Thanks a lot!!!

I just ran a couple of commands in a Debian Docker container created via GitLab CI to see what resources are available:

free -h
                total        used        free      shared  buff/cache   available
  Mem:            15G        361M         12G         48M        2.4G         15G
  Swap:          2.0G         76K        2.0G

cat /proc/cpuinfo | grep ^processor
  processor	: 0
  processor	: 1
  processor	: 2
  processor	: 3
  processor	: 4
  processor	: 5

I think this is the result of the edits I made to /etc/gitlab-runner/config.toml on the VM.
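For anyone curious, the kind of edits involved would look something like this `config.toml` fragment; the values here are illustrative (matching the host’s 6 cores and 16G of RAM), not a copy of the live config:

```toml
# /etc/gitlab-runner/config.toml -- illustrative values only
concurrent = 4              # global cap on simultaneous jobs

[[runners]]
  [runners.docker]
    cpuset_cpus = "0-5"     # let build containers use all six cores
    memory = "16g"          # raise the container memory limit
```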

For reference, I created a new gitdotcoop account at GitHub for the GitHub integration, to enable the importing of projects from GitHub and to enable logins to be completed using your GitHub account.

Might as well use this thread for news…

We have enabled the GitLab Docker Registry, and this means that you can now use GitLab CI to build and host Docker containers :slight_smile:.
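For anyone wanting to try it, a minimal build-and-push job using GitLab’s predefined CI variables would look roughly like this (the job name and tag are arbitrary):

```yaml
# .gitlab-ci.yml -- minimal sketch of building and pushing to the registry
build_image:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

`CI_REGISTRY`, `CI_REGISTRY_IMAGE` and the login credentials are all provided automatically by GitLab CI, so nothing needs to be hard-coded.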

So far we have only built a couple of containers, one to be used by the Bind 9 zonefile repos CI (having packages pre-installed saves one minute per test after each update) and another with a recent version of Ansible for use by repos that are doing deployment using Ansible from Docker containers.
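As a sketch of the pre-installed-packages approach, a Dockerfile for a zonefile-checking image might look like this; the base image and package choice here are assumptions, not the actual containers we built (`bind9utils` provides `named-checkzone` and `named-checkconf`):

```dockerfile
# Hypothetical image with BIND tools pre-installed for zonefile CI
FROM debian:stretch
RUN apt-get update \
 && apt-get install -y --no-install-recommends bind9utils \
 && rm -rf /var/lib/apt/lists/*
```

Because the packages are baked into the image, each CI job skips the install step entirely.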

Thanks to @lukewm we now also have SSO (single sign-on) working from the main site to the Discourse forum and GitLab. This means if you are logged in to the main site you can click through to log in to these other sites.

Ah well here! I just opened the can of libreworms … :wink: Nice work!

Really sick to see the container registry. I’ll be putting this to good use ASAP.

1 Like