parent 044b8da55a
commit d5a822884f
288 changed files with 13040 additions and 1 deletion
6  bots/.gitignore  vendored  Normal file
@@ -0,0 +1,6 @@
*.pyc
*.qcow2
*.partial
*.xz
/*.log
/build-results/
38  bots/HACKING.md  Normal file
@@ -0,0 +1,38 @@
# Hacking on the Cockpit Bots

These are the automated bots and tests that work on the Cockpit project. This
includes updating operating system images, bringing in changes from other
projects, releasing Cockpit and more.

## Environment for the bots

The bots work in containers that are built in the [cockpituous](https://github.com/cockpit-project/cockpituous)
repository. New dependencies should be added to the `tests/Dockerfile`
in that repository.

## Invoking the bots

1. The containers in the `cockpituous` repository invoke the `.tasks` file
   at the root of this repository.
1. The `.tasks` file prints a list of possible tasks on standard output.
1. The printed tasks are sorted in reverse alphabetical order, and one of the
   first items in the list is executed.

## The bots themselves

Most bots are Python scripts. They live in this `bots/` directory. Shared code
is in the `bots/task` directory.

## Bots filing issues

Many bots file or work with issues in the GitHub repository. We can use issues to
tell bots what to do. Often certain bots will just file issues for tasks that are
outstanding, and in many cases other bots will then perform those tasks.

These bots are listed in the `bots/issue-scan` file. They are written using the
`bots/task/__init__.py` code, and you can see `bots/example-task` for an
example of one.

## Bots printing output

The bot output is posted using the cockpituous [sink](https://github.com/cockpit-project/cockpituous/tree/master/sink) code. See that link for how it works.
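The task-selection behaviour described in "Invoking the bots" can be illustrated with a short sketch. This is not code from the repository; it only assumes that `./.tasks` is an executable that prints one candidate command per line.

```python
#!/usr/bin/env python3
# Illustrative sketch only: mimics how a runner might pick a task from the
# output of the repository's ".tasks" file (one command per line), sorted in
# reverse alphabetical order, as described above.
import random
import subprocess

def pick_task():
    output = subprocess.check_output(["./.tasks"], universal_newlines=True)
    tasks = sorted((line for line in output.splitlines() if line.strip()), reverse=True)
    if not tasks:
        return None
    # "one of the first items in the list is executed"
    return random.choice(tasks[:3])

if __name__ == "__main__":
    task = pick_task()
    if task:
        subprocess.call(task, shell=True)
```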
115  bots/README.md  Normal file
@@ -0,0 +1,115 @@
# Cockpit Bots

These are automated bots and tools that work on Cockpit. This
includes updating operating system images, testing changes,
releasing Cockpit and more.

## Images

In order to test Cockpit-related projects, they are staged into an operating
system image. These images are tracked in the `bots/images` directory.

These well-known image names are expected to contain no `.`
characters and have no file name extension.

For managing these images:

 * image-download: Download test images
 * image-upload: Upload test images
 * image-create: Create test machine images
 * image-customize: Generic tool to install packages, upload files, or run
   commands in a test machine image
 * image-prepare: Build and install Cockpit packages into a test machine image
   (specific to the cockpit project itself, thus it is in test/, not bots/)

For debugging the images:

 * bots/vm-run: Run a test machine image
 * bots/vm-reset: Remove all overlays from image-customize, image-prepare, etc.
   from test/images/

If you get a `qemu-system-x86_64: -netdev bridge,br=cockpit1,id=bridge0: bridge helper failed`
error, please [allow][1] `qemu-bridge-helper` to access the bridge settings.

To check when images will automatically be refreshed by the bots,
use the image-trigger tool:

    $ bots/image-trigger -vd

## Tests

The bots automatically run the tests as needed on pull requests
and branches. To check when and where tests will be run, use the
tests-scan tool:

    $ bots/tests-scan -vd

## Integration with GitHub

A number of machines are watching our GitHub repository and are
executing tests for pull requests as well as making new images.

Most of this happens automatically, but you can influence their
actions with the tests-trigger utility in this directory.

### Setup

You need a GitHub token in `~/.config/github-token`. You can create one
for your account at

    https://github.com/settings/tokens

When generating a new personal access token, the scope only needs to
encompass `public_repo` (or `repo` if you're accessing a private repo).

### Retrying a failed test

If you want to run the "verify/fedora-atomic" testsuite again for pull
request #1234, run tests-trigger like so:

    $ bots/tests-trigger 1234 verify/fedora-atomic

### Testing a pull request by a non-whitelisted user

If you want to run all tests on pull request #1234 that has been
opened by someone who is not in our whitelist, run tests-trigger
like so:

    $ bots/tests-trigger -f 1234

Of course, you should make sure that the pull request is proper and
doesn't execute evil code during tests.

### Refreshing a test image

Test images are refreshed automatically once per week, and even if the
last refresh has failed, the machines wait one week before trying again.

If you want the machines to refresh the fedora-atomic image immediately,
run image-trigger like so:

    $ bots/image-trigger fedora-atomic

### Creating new images for a pull request

If as part of some new feature you need to change the content of some
or all images, you can ask the machines to create those images.

If you want to have a new fedora-atomic image for pull request #1234, add
a bullet point to that pull request's description like so, and add the
"bot" label to the pull request.

    * [ ] image-refresh fedora-atomic

The machines will post comments to the pull request about their
progress and at the end there will be links to commits with the new
images. You can then include these commits into the pull request in
any way you like.

If you are certain about the changes to the images, it is probably a
good idea to make a dedicated pull request just for the images. That
pull request can then hopefully be merged to master faster. If
instead the images are created on the main feature pull request and
sit there for a long time, they might cause annoying merge conflicts.

[1]: https://blog.christophersmart.com/2016/08/31/configuring-qemu-bridge-helper-after-access-denied-by-acl-file-error/
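The "Creating new images for a pull request" workflow hinges on a checklist bullet in the pull request description. Below is a minimal sketch of how such a bullet could be recognised; the regular expression and function name are illustrative assumptions, not the parser the bots actually use (see `bots/issue-scan` for that).

```python
# Illustrative sketch: recognise unchecked "image-refresh <image>" bullets in a
# pull request body, e.g. "* [ ] image-refresh fedora-atomic".  The regex and
# helper name are assumptions for illustration only.
import re

BULLET = re.compile(r"^\s*\*\s*\[ \]\s*image-refresh\s+(\S+)\s*$", re.MULTILINE)

def requested_refreshes(pr_body):
    return BULLET.findall(pr_body)

assert requested_refreshes("* [ ] image-refresh fedora-atomic\n") == ["fedora-atomic"]
```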
49  bots/example-task  Executable file
@@ -0,0 +1,49 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2016 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

# To use this example add a line to an issue with the "bot" label
#
# * [ ] example-bot 20
#

import os
import sys
import time

sys.dont_write_bytecode = True

import task

BOTS = os.path.abspath(os.path.dirname(__file__))
BASE = os.path.normpath(os.path.join(BOTS, ".."))

def run(argument, verbose=False, **kwargs):
    try:
        int(argument)
    except (TypeError, ValueError):
        return "Failed to parse argument"

    sys.stdout.write("Example message to log\n")

    # Attach the package.json script as an example
    task.attach("./package.json")
    time.sleep(20)

if __name__ == '__main__':
    task.main(function=run, title="Example bot task")
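For local experimentation, the argument validation in `run()` above can be exercised without GitHub access. The snippet below is only an illustrative restatement of that check, not part of the repository.

```python
# Sketch: the same validation run() performs on its issue argument,
# restated standalone for illustration.
def parse_argument(argument):
    try:
        int(argument)
    except (TypeError, ValueError):
        return "Failed to parse argument"
    return None

assert parse_argument("20") is None
assert parse_argument("twenty") == "Failed to parse argument"
```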
181  bots/flakes-refresh  Executable file
@@ -0,0 +1,181 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2018 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

import sys
import time
import os
import urllib
import json
import re

sys.dont_write_bytecode = True

import task

NUMBER_OPEN_ISSUES = 7  # How many issues do we want to have open at a given time?

# How far back does our data go? If a flake gets fixed but is still
# flaky after this many days, the bots open another issue.

WINDOW_DAYS = 21

# This parses the output JSONL format discussed here, where various
# values are grouped:
#
# https://github.com/cockpit-project/cockpituous/blob/master/learn/README.md

# Here we're looking for a field in a record that only has one value
def value(record, field):
    values = record.get(field, [])
    if len(values) == 1:
        return values[0][0] or ""
    return None

# Here we're looking for the count of a specific field/value in the record
def count(record, field, only):
    values = record.get(field, [])
    for value, count in values:
        if value != only:
            continue
        return count
    return 0

# For linking flakes to test logs

def slurp_one(url, n, logs):
    items_url = url + str(n) + "/items.jsonl"
    try:
        with urllib.request.urlopen(items_url) as f:
            for line in f.readlines():
                try:
                    record = json.loads(line.decode('utf-8'))
                    logs.setdefault(record["test"], [ ]).append(record["url"])
                except ValueError as ex:
                    sys.stderr.write("{0}: {1}\n".format(url, ex))
    except urllib.error.URLError as ex:
        if ex.code == 404:
            return False
        raise
    return True

def slurp_failure_logs(url):
    logs = { }
    n = 0
    while slurp_one(url, n, logs):
        n = n + 1
    return logs

def get_failure_logs(failure_logs, name, context):
    match = context.replace("/", "-")
    return sorted(filter(lambda url: match in url, failure_logs[name]), reverse=True)[0:10]

# Main

def run(context, verbose=False, **kwargs):
    api = task.github.GitHub()

    open_issues = api.issues(labels=[ "flake" ])
    create_count = NUMBER_OPEN_ISSUES - len(open_issues)

    if create_count <= 0:
        return 0

    if verbose:
        sys.stderr.write("Going to create %s new flake issue(s)\n" % create_count)

    host = os.environ.get("COCKPIT_LEARN_SERVICE_HOST", "learn-cockpit.apps.ci.centos.org")
    port = os.environ.get("COCKPIT_LEARN_SERVICE_PORT", "443")
    url = "{0}://{1}:{2}/active/".format("https" if port == "443" else "http", host, port)

    failure_logs = slurp_failure_logs(url)

    # Retrieve the URL
    statistics = [ ]
    with urllib.request.urlopen(url + "statistics.jsonl") as f:
        for line in f.readlines():
            try:
                record = json.loads(line.decode('utf-8'))
                statistics.append(record)
            except ValueError as ex:
                sys.stderr.write("{0}: {1}\n".format(url, ex))

    tests = { }

    for record in statistics:
        test = value(record, "test")
        context = value(record, "context")
        status = value(record, "status")
        tracker = value(record, "tracker")

        # Flaky tests only score on those that fail and are not tracked
        if test is not None and status == "failure" and not tracker:
            merged = count(record, "merged", True)
            not_merged = count(record, "merged", False)
            null_merged = count(record, "merged", None)
            total = merged + not_merged + null_merged

            # And the key is that they were merged anyway
            if total > 10:
                tests.setdefault(test, [ ]).append((merged / total, context, record))

    scores = [ ]

    for n, t in tests.items():
        scores.append((sum(map(lambda f: f[0], t))/len(t), n, t))

    closed_issues = api.issues(labels=["flake"], state="closed", since=(time.time() - (WINDOW_DAYS * 86400)))

    def find_in_issues(issues, name):
        for issue in issues:
            if name in issue["title"]:
                return True
        return False

    def url_desc(url):
        m = re.search("pull-[0-9]+", url)
        return m.group(0) if m else url

    def failure_description(name, f, logs):
        return ("%s%% on %s\n" % (int(f[0]*100), f[1]) +
                "".join(map(lambda url: " - [%s](%s)\n" % (url_desc(url), url),
                            get_failure_logs(logs, name, f[1]))))

    scores.sort(reverse=True)
    for score, name, failures in scores:
        if find_in_issues(open_issues, name) or find_in_issues(closed_issues, name):
            continue

        if verbose:
            sys.stderr.write("Opening issue for %s\n" % name)
        source = "<details><summary>Source material</summary>\n\n```json\n%s\n```\n</details>\n" % "\n".join(map(lambda f: json.dumps(f[2], indent=2), failures))
        data = {
            "title": "%s is flaky" % name,
            "body": ("\n".join(map(lambda f: failure_description(name, f, failure_logs), failures)) +
                     "\n\n" + source),
            "labels": [ "flake" ]
        }
        api.post("issues", data)
        create_count -= 1
        if create_count == 0:
            break

    return 0

if __name__ == '__main__':
    task.main(function=run, title="Create issues for test flakes")
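The `value()` and `count()` helpers above operate on records whose fields are lists of `(value, count)` pairs, as produced by the learn service's grouped JSONL output. The record below is a made-up example to show how they behave when run alongside those helpers; it is not real data from the service.

```python
# Made-up record in the grouped (value, count) format that value()/count()
# above expect; run together with those helper definitions.
record = {
    "test": [("TestNetworking.testTeam", 42)],
    "status": [("failure", 42)],
    "merged": [(True, 30), (False, 8), (None, 4)],
}

# value() returns the single grouped value, count() the tally for one value.
assert value(record, "status") == "failure"
assert count(record, "merged", True) == 30
assert count(record, "merged", None) == 4
```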
73  bots/github-info  Executable file
@@ -0,0 +1,73 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2015 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

# Shared GitHub code. When run as a script, we print out info about
# our GitHub interaction.

import argparse
import datetime
import sys

sys.dont_write_bytecode = True

from task import github

def httpdate(dt):
    """Return a string representation of a date according to RFC 1123
    (HTTP/1.1).

    The supplied date must be in UTC.

    From: http://stackoverflow.com/a/225106

    """
    weekday = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][dt.weekday()]
    month = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep",
             "Oct", "Nov", "Dec"][dt.month - 1]
    return "%s, %02d %s %04d %02d:%02d:%02d GMT" % (weekday, dt.day, month,
                                                    dt.year, dt.hour, dt.minute, dt.second)

def main():
    parser = argparse.ArgumentParser(description='Test GitHub rate limits')
    parser.parse_args()

    # in order for the limit not to be affected by the call itself,
    # use a conditional request with a timestamp in the future

    future_timestamp = datetime.datetime.utcnow() + datetime.timedelta(seconds=3600)

    api = github.GitHub()
    headers = { 'If-Modified-Since': httpdate(future_timestamp) }
    response = api.request("GET", "git/refs/heads/master", "", headers)
    sys.stdout.write("Rate limits:\n")
    for entry in ["X-RateLimit-Limit", "X-RateLimit-Remaining", "X-RateLimit-Reset"]:
        entries = [t for t in response['headers'].items() if t[0].lower() == entry.lower()]
        if entries:
            if entry == "X-RateLimit-Reset":
                try:
                    readable = datetime.datetime.utcfromtimestamp(float(entries[0][1])).isoformat()
                except:
                    readable = "parse error"
                sys.stdout.write("{0}: {1} ({2})\n".format(entry, entries[0][1], readable))
            else:
                sys.stdout.write("{0}: {1}\n".format(entry, entries[0][1]))

if __name__ == '__main__':
    sys.exit(main())
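The conditional-request trick used in `main()` above deserves a note: the script itself documents that asking with an `If-Modified-Since` timestamp in the future avoids consuming the rate limit being measured. A small sketch of building such a header with the `httpdate()` helper defined above:

```python
# Sketch of the conditional-request trick above: an If-Modified-Since header
# one hour in the future normally yields "304 Not Modified", so the probe
# itself does not count against the rate limit being inspected.
import datetime

future = datetime.datetime.utcnow() + datetime.timedelta(seconds=3600)
headers = {"If-Modified-Since": httpdate(future)}
print(headers)  # e.g. {'If-Modified-Since': 'Tue, 08 May 2018 12:00:00 GMT'}
```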
210  bots/image-create  Executable file
@@ -0,0 +1,210 @@
#!/usr/bin/env python3
# This file is part of Cockpit.
#
# Copyright (C) 2015 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

# image-create -- Make a root image suitable for use with vm-run.
#
# Installs the OS indicated by TEST_OS into the image
# for the test machine and tweaks it to be usable with
# vm-run and testlib.py.

import argparse
import os
import shutil
import subprocess
import sys
import time
import tempfile

BOTS = os.path.abspath(os.path.dirname(__file__))
BASE = os.path.normpath(os.path.join(BOTS, ".."))

from machine import testvm

parser = argparse.ArgumentParser(description='Create a virtual machine image')
parser.add_argument('-v', '--verbose', action='store_true', help='Display verbose progress details')
parser.add_argument('-s', '--sit', action='store_true', help='Sit and wait if setup script fails')
parser.add_argument('-n', '--no-save', action='store_true', help='Don\'t save the new image')
parser.add_argument('-u', '--upload', action='store_true', help='Upload the image after creation')
parser.add_argument('--no-build', action='store_true', dest='no_build',
                    help='Don\'t build packages and create the vm without build capabilities')
parser.add_argument("--store", default=None, help="Where to send images")
parser.add_argument('image', help='The image to create')
args = parser.parse_args()

# default to --no-build for some images
if args.image in ["candlepin", "continuous-atomic", "fedora-atomic", "ipa", "rhel-atomic", "selenium", "openshift"]:
    if not args.no_build:
        if args.verbose:
            print("Creating machine without build capabilities based on the image type")
        args.no_build = True

class MachineBuilder:
    def __init__(self, machine):
        tempdir = testvm.get_temp_dir()
        self.machine = machine

        os.makedirs(tempdir, 0o750, exist_ok=True)

        # Use a tmp filename
        self.target_file = self.machine.image_file
        fp, self.machine.image_file = tempfile.mkstemp(dir=tempdir, prefix=self.machine.image, suffix=".qcow2")
        os.close(fp)

    def bootstrap_system(self):
        assert not self.machine._domain

        os.makedirs(self.machine.run_dir, 0o750, exist_ok=True)

        bootstrap_script = os.path.join(testvm.SCRIPTS_DIR, "%s.bootstrap" % (self.machine.image, ))

        if os.path.isfile(bootstrap_script):
            subprocess.check_call([ bootstrap_script, self.machine.image_file ])
        else:
            raise testvm.Failure("Unsupported OS %s: %s not found." % (self.machine.image, bootstrap_script))

    def run_setup_script(self, script):
        """Prepare a test image further by running some commands in it."""
        self.machine.start()
        try:
            self.machine.wait_boot(timeout_sec=120)
            self.machine.upload([ os.path.join(testvm.SCRIPTS_DIR, "lib") ], "/var/lib/testvm")
            self.machine.upload([script], "/var/tmp/SETUP")
            self.machine.upload([ os.path.join(testvm.SCRIPTS_DIR, "lib", "base") ],
                                "/var/tmp/cockpit-base")

            if "rhel" in self.machine.image:
                self.machine.upload([ os.path.expanduser("~/.rhel") ], "/root/")

            env = {
                "TEST_OS": self.machine.image,
                "DO_BUILD": "0" if args.no_build else "1",
            }
            self.machine.message("run setup script on guest")

            try:
                self.machine.execute(script="/var/tmp/SETUP " + self.machine.image,
                                     environment=env, quiet=not self.machine.verbose, timeout=7200)
                self.machine.execute(command="rm -f /var/tmp/SETUP")
                self.machine.execute(command="rm -rf /root/.rhel")

                if self.machine.image == 'openshift':
                    # update our local openshift kube config file to match the new image
                    self.machine.download("/root/.kube/config", os.path.join(BOTS, "images/files/openshift.kubeconfig"))

            except subprocess.CalledProcessError as ex:
                if args.sit:
                    sys.stderr.write(self.machine.diagnose())
                    input("Press RET to continue... ")
                raise testvm.Failure("setup failed with code {0}\n".format(ex.returncode))

        finally:
            self.machine.stop(timeout_sec=500)

    def boot_system(self):
        """Start the system to make sure it can boot, then shutdown cleanly

        This also takes care of any selinux relabeling setup triggered
        Don't wait for an ip address during start, since the system might reboot"""
        self.machine.start()
        try:
            self.machine.wait_boot(timeout_sec=120)
        finally:
            self.machine.stop(timeout_sec=120)

    def build(self):
        self.bootstrap_system()

        # gather the scripts, separated by reboots
        script = os.path.join(testvm.SCRIPTS_DIR, "%s.setup" % (self.machine.image, ))

        if not os.path.exists(script):
            return

        self.machine.message("Running setup script %s" % (script))
        self.run_setup_script(script)

        tries_left = 3
        successfully_booted = False
        while tries_left > 0:
            try:
                # make sure we can boot the system
                self.boot_system()
                successfully_booted = True
                break
            except:
                # we might need to wait for the image to become available again
                # accessing it in maintain=True mode successively can trigger qemu errors
                time.sleep(3)
                tries_left -= 1
        if not successfully_booted:
            raise testvm.Failure("Unable to verify that machine boot works.")

    def save(self):
        data_dir = testvm.get_images_data_dir()

        os.makedirs(data_dir, 0o750, exist_ok=True)

        if not os.path.exists(self.machine.image_file):
            raise testvm.Failure("Nothing to save.")

        partial = os.path.join(data_dir, self.machine.image + ".partial")

        # Copy image via convert, to make it sparse again
        subprocess.check_call([ "qemu-img", "convert", "-c", "-O", "qcow2", self.machine.image_file, partial ])

        # Hash the image here
        (sha, x1, x2) = subprocess.check_output([ "sha256sum", partial ], universal_newlines=True).partition(" ")
        if not sha:
            raise testvm.Failure("sha256sum returned invalid output")

        name = self.machine.image + "-" + sha + ".qcow2"
        data_file = os.path.join(data_dir, name)
        shutil.move(partial, data_file)

        # Remove temp image file
        os.unlink(self.machine.image_file)

        # Update the images symlink
        if os.path.islink(self.target_file):
            os.unlink(self.target_file)
        os.symlink(name, self.target_file)

        # Handle alternate images data directory
        image_file = os.path.join(testvm.IMAGES_DIR, name)
        if not os.path.exists(image_file):
            os.symlink(os.path.abspath(data_file), image_file)

try:
    testvm.VirtMachine.memory_mb = 2048
    machine = testvm.VirtMachine(verbose=args.verbose, image=args.image, maintain=True)
    builder = MachineBuilder(machine)
    builder.build()
    if not args.no_save:
        print("Saving...")
        builder.save()
        if args.upload:
            print("Uploading...")
            cmd = [ os.path.join(BOTS, "image-upload") ]
            if args.store:
                cmd += [ "--store", args.store ]
            cmd += [ args.image ]
            subprocess.check_call(cmd)

except testvm.Failure as ex:
    sys.stderr.write("image-create: %s\n" % ex)
    sys.exit(1)
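The `save()` step above names the converted image after its SHA256 checksum before updating the symlink. Below is a standalone sketch of that naming scheme; it uses Python's `hashlib` instead of the external `sha256sum` call the script actually makes, so it is an illustration rather than the exact code path.

```python
# Sketch of the content-addressed naming used by save(): <image>-<sha256>.qcow2.
# Uses hashlib rather than the external sha256sum invocation in the script above.
import hashlib

def hashed_image_name(image, path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return "{0}-{1}.qcow2".format(image, digest.hexdigest())

# e.g. hashed_image_name("fedora-28", "fedora-28.partial")
#   -> "fedora-28-3f5a...c9.qcow2"
```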
147  bots/image-customize  Executable file
@@ -0,0 +1,147 @@
#!/usr/bin/env python3
# This file is part of Cockpit.
#
# Copyright (C) 2015 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

import argparse
import os
import re
import sys
import subprocess

BOTS = os.path.abspath(os.path.dirname(__file__))
BASE = os.path.normpath(os.path.join(BOTS, ".."))
TEST = os.path.join(BASE, "test")
os.environ["PATH"] = "{0}:{1}".format(os.environ.get("PATH"), BOTS)

from machine import testvm

parser = argparse.ArgumentParser(
    description='Run command inside or install packages into a Cockpit virtual machine',
    formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument('-v', '--verbose', action='store_true', help='Display verbose progress details')
parser.add_argument('-i', '--install', action='append', dest="packagelist", default=[], help='Install packages')
parser.add_argument('-I', '--install-command', action='store', dest="installcommand",
                    default="yum --setopt=skip_missing_names_on_install=False -y install",
                    help="Command used to install packages in machine")
parser.add_argument('-r', '--run-command', action='append', dest="commandlist",
                    default=[], help='Run command inside virtual machine')
parser.add_argument('-s', '--script', action='append', dest="scriptlist",
                    default=[], help='Run selected script inside virtual machine')
parser.add_argument('-u', '--upload', action='append', dest="uploadlist",
                    default=[], help='Upload file/dir to destination file/dir separated by ":", example: -u file.txt:/var/lib')
parser.add_argument('--base-image', help='Base image name, if "image" does not match a standard Cockpit VM image name')
parser.add_argument('--resize', help="Resize the image. Size in bytes, optionally with a K, M, or G suffix.")
parser.add_argument('image', help='The image to use (destination name when using --base-image)')
args = parser.parse_args()

if not args.base_image:
    args.base_image = os.path.basename(args.image)

args.base_image = testvm.get_test_image(args.base_image)

# Create the necessary layered image for the build/install
def prepare_install_image(base_image, install_image):
    if "/" not in base_image:
        base_image = os.path.join(testvm.IMAGES_DIR, base_image)
    if "/" not in install_image:
        install_image = os.path.join(os.path.join(TEST, "images"), os.path.basename(install_image))

    # In vm-customize we don't force recreate images
    if not os.path.exists(install_image):
        install_image_dir = os.path.dirname(install_image)
        os.makedirs(install_image_dir, exist_ok=True)
        base_image = os.path.realpath(base_image)
        qcow2_image = "{0}.qcow2".format(install_image)
        subprocess.check_call([ "qemu-img", "create", "-q", "-f", "qcow2",
                                "-o", "backing_file={0},backing_fmt=qcow2".format(base_image), qcow2_image ])
        if os.path.lexists(install_image):
            os.unlink(install_image)
        os.symlink(os.path.basename(qcow2_image), install_image)

    if args.resize:
        subprocess.check_call(["qemu-img", "resize", install_image, args.resize])

    return install_image

def run_command(machine_instance, commandlist):
    """Run command inside image"""
    for foo in commandlist:
        try:
            machine_instance.execute(foo, timeout=1800)
        except subprocess.CalledProcessError as e:
            sys.stderr.write("%s\n" % e)
            sys.exit(e.returncode)

def run_script(machine_instance, scriptlist):
    """Run script inside image"""
    for foo in scriptlist:
        if os.path.isfile(foo):
            pname = os.path.basename(foo)
            uploadpath = "/var/tmp/" + pname
            machine_instance.upload([os.path.abspath(foo)], uploadpath)
            machine_instance.execute("chmod a+x %s" % uploadpath)
            try:
                machine_instance.execute(uploadpath, timeout=1800)
            except subprocess.CalledProcessError as e:
                sys.stderr.write("%s\n" % e)
                sys.exit(e.returncode)
        else:
            sys.stderr.write("Bad path to script: %s\n" % foo)

def upload_files(machine_instance, uploadfiles):
    """Upload files/directories inside image"""
    for foo in uploadfiles:
        srcfile, dest = foo.split(":")
        src_absolute = os.path.join(os.getcwd(), srcfile)
        machine_instance.upload([src_absolute], dest)

def install_packages(machine_instance, packagelist, install_command):
    """Install packages into a test image

    It could be done via local rpms or normal package installation
    """
    allpackages = []
    for foo in packagelist:
        if os.path.isfile(foo):
            pname = os.path.basename(foo)
            machine_instance.upload([foo], "/var/tmp/" + pname)
            allpackages.append("/var/tmp/" + pname)
        elif not re.search("/", foo):
            allpackages.append(foo)
        else:
            sys.stderr.write("Bad package name or path: %s\n" % foo)
    if allpackages:
        machine_instance.execute(install_command + " " + ' '.join(allpackages), timeout=1800)

if args.commandlist or args.packagelist or args.scriptlist or args.uploadlist or args.resize:
    if '/' not in args.base_image:
        subprocess.check_call(["image-download", args.base_image])
    machine = testvm.VirtMachine(maintain=True,
                                 verbose=args.verbose, image=prepare_install_image(args.base_image, args.image))
    machine.start()
    machine.wait_boot()
    try:
        if args.uploadlist:
            upload_files(machine, args.uploadlist)
        if args.commandlist:
            run_command(machine, args.commandlist)
        if args.packagelist:
            install_packages(machine, args.packagelist, args.installcommand)
        if args.scriptlist:
            run_script(machine, args.scriptlist)
    finally:
        machine.stop()
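The core of `prepare_install_image()` above is a qcow2 overlay whose backing file is the pristine downloaded image, so customizations never touch the original. A minimal standalone sketch of that one step follows; the file names are hypothetical.

```python
# Sketch: create a copy-on-write overlay on top of a pristine base image,
# mirroring the qemu-img invocation used by prepare_install_image() above.
# The paths are hypothetical examples.
import subprocess

base = "/var/lib/cockpit-images/fedora-28-abc123.qcow2"   # pristine download
overlay = "test/images/fedora-28.qcow2"                   # throw-away overlay

subprocess.check_call([
    "qemu-img", "create", "-q", "-f", "qcow2",
    "-o", "backing_file={0},backing_fmt=qcow2".format(base),
    overlay,
])
```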
305  bots/image-download  Executable file
@@ -0,0 +1,305 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2013 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

#
# Download images or other state
#
# Images usually have a name specific link committed to git. These
# are referred to as 'committed'
#
# Other state is simply referenced by name without a link in git
# This is referred to as 'state'
#
# The stores are places to look for images or other state
#

import argparse
import email
import io
import os
import shutil
import socket
import stat
import subprocess
import sys
import tempfile
import time
import fcntl
import urllib.parse

from machine import testvm
from task import REDHAT_STORE

CONFIG = "~/.config/image-stores"
DEFAULT = [
    "http://cockpit-images.verify.svc.cluster.local",
    "https://images-cockpit.apps.ci.centos.org/",
    "https://209.132.184.41:8493/",
    REDHAT_STORE
]

BOTS = os.path.dirname(os.path.realpath(__file__))

DEVNULL = open("/dev/null", "r+")
EPOCH = "Thu, 1 Jan 1970 00:00:00 GMT"

def find(name, stores, latest, quiet):
    found = [ ]
    ca = os.path.join(testvm.IMAGES_DIR, "files", "ca.pem")

    for store in stores:
        url = urllib.parse.urlparse(store)

        defport = url.scheme == 'http' and 80 or 443

        try:
            ai = socket.getaddrinfo(url.hostname, url.port or defport, socket.AF_INET, 0, socket.IPPROTO_TCP)
        except socket.gaierror:
            ai = [ ]
            message = store

        for (family, socktype, proto, canonname, sockaddr) in ai:
            message = "{scheme}://{0}:{1}{path}".format(*sockaddr, scheme=url.scheme, path=url.path)

            def curl(*args):
                try:
                    cmd = ["curl"] + list(args) + ["--head", "--silent", "--fail", "--cacert", ca, source]
                    start = time.time()
                    output = subprocess.check_output(cmd, universal_newlines=True)
                    found.append((cmd, output, message, time.time() - start))
                    if not quiet:
                        sys.stderr.write(" > {0}\n".format(message))
                    return True
                except subprocess.CalledProcessError:
                    return False

            # first try with stores that accept the "cockpit-tests" host name
            resolve = "cockpit-tests:{1}:{0}".format(*sockaddr)
            source = urllib.parse.urljoin("{0}://cockpit-tests:{1}{2}".format(url.scheme, sockaddr[1], url.path), name)
            if curl("--resolve", resolve):
                continue

            # fall back for OpenShift proxied stores which send their own SSL cert initially; host name has to match that
            source = urllib.parse.urljoin(store, name)
            if curl():
                continue

            if not quiet:
                sys.stderr.write(" x {0}\n".format(message))

    # If we couldn't find the file, but it exists, we're good
    if not found:
        return None, None

    # Find the most recent version of this file
    def header_date(args):
        cmd, output, message, latency = args
        try:
            reply_line, headers_alone = output.split('\n', 1)
            last_modified = email.message_from_file(io.StringIO(headers_alone)).get("Last-Modified", "")
            return time.mktime(time.strptime(last_modified, '%a, %d %b %Y %H:%M:%S %Z'))
        except ValueError:
            return ""

    if latest:
        found.sort(reverse=True, key=header_date)
    else:
        found.sort(reverse=False, key=lambda x: x[3])

    # Return the command and message
    return found[0][0], found[0][2]

def download(dest, force, state, quiet, stores):
    if not stores:
        config = os.path.expanduser(CONFIG)
        if os.path.exists(config):
            with open(config, 'r') as fp:
                stores = fp.read().strip().split("\n")
        else:
            stores = []
        stores += DEFAULT

    # The time condition for If-Modified-Since
    exists = not force and os.path.exists(dest)
    if exists:
        since = dest
    else:
        since = EPOCH

    name = os.path.basename(dest)
    cmd, message = find(name, stores, latest=state, quiet=quiet)

    # If we couldn't find the file, but it exists, we're good
    if not cmd:
        if exists:
            return
        raise RuntimeError("image-download: couldn't find file anywhere: {0}".format(name))

    # Choose the first found item after sorting by date
    if not quiet:
        sys.stderr.write(" > {0}\n".format(urllib.parse.urljoin(message, name)))

    temp = dest + ".partial"

    # Adjust the command above that worked to make it visible and download real stuff
    cmd.remove("--head")
    cmd.append("--show-error")
    if not quiet and os.isatty(sys.stdout.fileno()):
        cmd.remove("--silent")
        cmd.insert(1, "--progress-bar")
    cmd.append("--remote-time")
    cmd.append("--time-cond")
    cmd.append(since)
    cmd.append("--output")
    cmd.append(temp)
    if os.path.exists(temp):
        if force:
            os.remove(temp)
        else:
            cmd.append("-C")
            cmd.append("-")

    # Always create the destination file (because --state)
    else:
        open(temp, 'a').close()

    curl = subprocess.Popen(cmd)
    ret = curl.wait()
    if ret != 0:
        raise RuntimeError("curl: unable to download %s (returned: %s)" % (message, ret))

    os.chmod(temp, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

    # Due to time-cond the file size may be zero
    # A new file downloaded, put it in place
    if not exists or os.path.getsize(temp) > 0:
        shutil.move(temp, dest)

# Calculate a place to put images where links are not committed in git
def state_target(path):
    data_dir = testvm.get_images_data_dir()
    os.makedirs(data_dir, mode=0o775, exist_ok=True)
    return os.path.join(data_dir, path)

# Calculate a place to put images where links are committed in git
def committed_target(image):
    link = os.path.join(testvm.IMAGES_DIR, image)
    if not os.path.islink(link):
        raise RuntimeError("image link does not exist: " + image)

    dest = os.readlink(link)
    relative_dir = os.path.dirname(os.path.abspath(link))
    full_dest = os.path.join(relative_dir, dest)
    while os.path.islink(full_dest):
        link = full_dest
        dest = os.readlink(link)
        relative_dir = os.path.dirname(os.path.abspath(link))
        full_dest = os.path.join(relative_dir, dest)

    dest = os.path.join(testvm.get_images_data_dir(), dest)

    # We have the file but there is not valid link
    if os.path.exists(dest):
        try:
            os.symlink(dest, os.path.join(testvm.IMAGES_DIR, os.readlink(link)))
        except FileExistsError:
            pass

    # The image file in the images directory, may be same as dest
    image_file = os.path.join(testvm.IMAGES_DIR, os.readlink(link))

    # Double check that symlink in place but never make a cycle.
    if os.path.abspath(dest) != os.path.abspath(image_file):
        try:
            os.symlink(os.path.abspath(dest), image_file)
        except FileExistsError:
            pass

    return dest

def wait_lock(target):
    lockfile = os.path.join(tempfile.gettempdir(), ".cockpit-test-resources", os.path.basename(target) + ".lock")
    os.makedirs(os.path.dirname(lockfile), exist_ok=True)

    # we need to keep the lock fd open throughout the entire runtime, so remember it in a global-scoped variable
    wait_lock.f = open(lockfile, "w")
    for retry in range(360):
        try:
            fcntl.flock(wait_lock.f, fcntl.LOCK_NB | fcntl.LOCK_EX)
            return
        except BlockingIOError:
            if retry == 0:
                print("Waiting for concurrent image-download of %s..." % os.path.basename(target))
            time.sleep(10)
    else:
        raise TimeoutError("timed out waiting for concurrent downloads of %s\n" % target)

def download_images(image_list, force, quiet, state, store):
    data_dir = testvm.get_images_data_dir()
    os.makedirs(data_dir, exist_ok=True)

    # A default set of images are all links in git. These links have
    # no directory part. Other links might exist, such as the
    # auxiliary links created by committed_target above, and we ignore
    # them.
    if not image_list:
        image_list = []
        if not state:
            for filename in os.listdir(testvm.IMAGES_DIR):
                link = os.path.join(testvm.IMAGES_DIR, filename)
                if os.path.islink(link) and os.path.dirname(os.readlink(link)) == "":
                    image_list.append(filename)

    success = True

    for image in image_list:
        image = testvm.get_test_image(image)
        try:
            if state:
                target = state_target(image)
            else:
                target = committed_target(image)

            # don't download the same thing multiple times in parallel
            wait_lock(target)

            if force or state or not os.path.exists(target):
                download(target, force, state, quiet, store)
        except Exception as ex:
            success = False
            sys.stderr.write("image-download: {0}\n".format(str(ex)))

    return success

def main():
    parser = argparse.ArgumentParser(description='Download a bot state or images')
    parser.add_argument("--force", action="store_true", help="Force unnecessary downloads")
    parser.add_argument("--store", action="append", help="Where to find state or images")
    parser.add_argument("--quiet", action="store_true", help="Make downloading quieter")
    parser.add_argument("--state", action="store_true", help="Images or state not recorded in git")
    parser.add_argument('image', nargs='*')
    args = parser.parse_args()

    if not download_images(args.image, args.force, args.quiet, args.state, args.store):
        return 1

    return 0

if __name__ == '__main__':
    sys.exit(main())
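The `header_date()` helper above picks the newest store by parsing the `Last-Modified` header out of a `curl --head` response. The sketch below shows that parsing in isolation, on a made-up response string.

```python
# Sketch of the Last-Modified parsing done by header_date() above,
# applied to a made-up curl --head response.
import email
import io
import time

head = ("HTTP/1.1 200 OK\n"
        "Last-Modified: Tue, 08 May 2018 10:00:00 GMT\n"
        "Content-Length: 1234\n")

reply_line, headers_alone = head.split('\n', 1)
last_modified = email.message_from_file(io.StringIO(headers_alone)).get("Last-Modified", "")
print(time.mktime(time.strptime(last_modified, '%a, %d %b %Y %H:%M:%S %Z')))
```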
1  bots/image-naughty  Symbolic link
@@ -0,0 +1 @@
tests-policy
219
bots/image-prune
Executable file
219
bots/image-prune
Executable file
|
|
@ -0,0 +1,219 @@
|
||||||
|
#!/usr/bin/env python3
|
||||||
|
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2013 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
# Days after which images expire if not in use
|
||||||
|
IMAGE_EXPIRE = 14
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import os
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
import time
|
||||||
|
import urllib
|
||||||
|
import re
|
||||||
|
|
||||||
|
from contextlib import contextmanager
|
||||||
|
|
||||||
|
from task import github
|
||||||
|
|
||||||
|
from machine import testvm
|
||||||
|
|
||||||
|
BOTS = os.path.dirname(os.path.realpath(__file__))
|
||||||
|
|
||||||
|
# threshold in G below which unreferenced qcow2 images will be pruned, even if they aren't old
|
||||||
|
PRUNE_THRESHOLD_G = float(os.environ.get("PRUNE_THRESHOLD_G", 15))
|
||||||
|
|
||||||
|
def enough_disk_space():
|
||||||
|
"""Check if available disk space in our data store is sufficient
|
||||||
|
"""
|
||||||
|
st = os.statvfs(testvm.get_images_data_dir())
|
||||||
|
free = st.f_bavail * st.f_frsize / (1024*1024*1024)
|
||||||
|
return free >= PRUNE_THRESHOLD_G;
|
||||||
|
|
||||||
|
def get_refs(open_pull_requests=True, offline=False):
|
||||||
|
"""Return dictionary for available refs of the format {'rhel-7.4': 'ad50328990e44c22501bd5e454746d4b5e561b7c'}
|
||||||
|
|
||||||
|
Expects to be called from the top level of the git checkout
|
||||||
|
If offline is true, git show-ref is used instead of listing the remote
|
||||||
|
"""
|
||||||
|
# get all remote heads and filter empty lines
|
||||||
|
# output of ls-remote has the format
|
||||||
|
#
|
||||||
|
# d864d3792db442e3de3d1811fa4bc371793a8f4f refs/heads/master
|
||||||
|
# ad50328990e44c22501bd5e454746d4b5e561b7c refs/heads/rhel-7.4
|
||||||
|
|
||||||
|
refs = { }
|
||||||
|
|
||||||
|
considerable = {}
|
||||||
|
if open_pull_requests:
|
||||||
|
if offline:
|
||||||
|
raise Exception("Unable to consider open pull requests when in offline mode")
|
||||||
|
for p in github.GitHub().pulls():
|
||||||
|
with urllib.request.urlopen(p["patch_url"]) as f:
|
||||||
|
images = []
|
||||||
|
# enough to look at the git commit header, it lists all changed files
|
||||||
|
changed = f.read(4000).decode('utf-8').split("\n")
|
||||||
|
for line in changed:
|
||||||
|
m = re.match("^ bots/images/([^\/]*)\| 2 \+\-$", line)
|
||||||
|
if m:
|
||||||
|
images.append(m.group(1).strip())
|
||||||
|
if images:
|
||||||
|
sha = p["head"]["sha"]
|
||||||
|
considerable[sha] = images
|
||||||
|
subprocess.call(["git", "fetch", "origin", "pull/{0}/head".format(p["number"])])
|
||||||
|
refs["pull request #{} ({})".format(p["number"], p["title"])] = sha
|
||||||
|
|
||||||
|
git_cmd = "show-ref" if offline else "ls-remote"
|
||||||
|
ref_output = subprocess.check_output(["git", git_cmd], universal_newlines=True).splitlines()
|
||||||
|
# filter out the "refs/heads/" prefix and generate a dictionary
|
||||||
|
prefix = "refs/heads"
|
||||||
|
for ln in ref_output:
|
||||||
|
[ref, name] = ln.split()
|
        if name.startswith(prefix):
            refs[name[len(prefix):]] = ref

    return (refs, considerable)


def get_image_links(ref, git_path):
    """Return all image links for the given git ref

    Expects to be called from the top level of the git checkout
    """
    # get all the links we have first
    # trailing slash on path is important
    if not git_path.endswith("/"):
        git_path = "{0}/".format(git_path)

    try:
        entries = subprocess.check_output(["git", "ls-tree", "--name-only", ref, git_path], universal_newlines=True).splitlines()
    except subprocess.CalledProcessError as e:
        if e.returncode == 128:
            sys.stderr.write("Skipping {0} due to tree error.\n".format(ref))
            return []
        raise
    links = [subprocess.check_output(["git", "show", "{0}:{1}".format(ref, entry)], universal_newlines=True) for entry in entries]
    return [link for link in links if link.endswith(".qcow2")]


@contextmanager
def remember_cwd():
    curdir = os.getcwd()
    try:
        yield
    finally:
        os.chdir(curdir)


def get_image_names(quiet=False, open_pull_requests=True, offline=False):
    """Return all image names used by all branches and optionally in open pull requests
    """
    images = set()
    # iterate over visible refs (mostly branches)
    # this hinges on being in the top level directory of the git checkout
    with remember_cwd():
        os.chdir(os.path.join(BOTS, ".."))
        (refs, considerable) = get_refs(open_pull_requests, offline)
        # list images present in each branch / pull request
        for name, ref in refs.items():
            if not quiet:
                sys.stderr.write("Considering images from {0} ({1})\n".format(name, ref))
            for link in get_image_links(ref, "bots/images"):
                if ref in considerable:
                    for consider in considerable[ref]:
                        if link.startswith(consider):
                            images.add(link)
                else:
                    images.add(link)

    return images


def prune_images(force, dryrun, quiet=False, open_pull_requests=True, offline=False, checkout_only=False):
    """Prune images
    """
    now = time.time()

    # everything we want to keep
    if checkout_only:
        targets = set()
    else:
        targets = get_image_names(quiet, open_pull_requests, offline)

    # what we have in the current checkout might already have been added by its branch, but check anyway
    for filename in os.listdir(testvm.IMAGES_DIR):
        path = os.path.join(testvm.IMAGES_DIR, filename)

        # only consider original image entries as trustworthy sources and ignore non-links
        if path.endswith(".qcow2") or path.endswith(".partial") or not os.path.islink(path):
            continue

        target = os.readlink(path)
        targets.add(target)

    expiry_threshold = now - IMAGE_EXPIRE * 86400
    for filename in os.listdir(testvm.get_images_data_dir()):
        path = os.path.join(testvm.get_images_data_dir(), filename)
        if not force and (enough_disk_space() and os.lstat(path).st_mtime > expiry_threshold):
            continue
        if os.path.isfile(path) and (path.endswith(".xz") or path.endswith(".qcow2") or path.endswith(".partial")) and filename not in targets:
            if not quiet or dryrun:
                sys.stderr.write("Pruning {0}\n".format(filename))
            if not dryrun:
                os.unlink(path)

    # now prune broken links
    for filename in os.listdir(testvm.IMAGES_DIR):
        path = os.path.join(testvm.IMAGES_DIR, filename)

        # don't prune original image entries and ignore non-links
        if not path.endswith(".qcow2") or not os.path.islink(path):
            continue

        # if the link isn't valid, prune
        if not os.path.isfile(path):
            if not quiet or dryrun:
                sys.stderr.write("Pruning link {0}\n".format(path))
            if not dryrun:
                os.unlink(path)


def every_image():
    result = []
    for filename in os.listdir(testvm.IMAGES_DIR):
        link = os.path.join(testvm.IMAGES_DIR, filename)
        if os.path.islink(link):
            result.append(filename)
    return result


def main():
    parser = argparse.ArgumentParser(description='Prune downloaded images')
    parser.add_argument("--force", action="store_true", help="Delete images even if they aren't old")
    parser.add_argument("--quiet", action="store_true", help="Make downloading quieter")
    parser.add_argument("-d", "--dry-run-prune", dest="dryrun", action="store_true", help="Don't actually delete images and links")
    parser.add_argument("-b", "--branches-only", dest="branches_only", action="store_true", help="Don't consider pull requests on GitHub, only look at branches")
    parser.add_argument("-c", "--checkout-only", dest="checkout_only", action="store_true", help="Consider neither pull requests on GitHub nor branches, only look at the current checkout")
    parser.add_argument("-o", "--offline", dest="offline", action="store_true", help="Don't access external sources such as GitHub")
    args = parser.parse_args()

    try:
        prune_images(args.force, args.dryrun, quiet=args.quiet, open_pull_requests=(not args.branches_only), offline=args.offline, checkout_only=args.checkout_only)
    except RuntimeError as ex:
        sys.stderr.write("image-prune: {0}\n".format(str(ex)))
        return 1

    return 0


if __name__ == '__main__':
    sys.exit(main())
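The pruning policy above deletes an unreferenced image file when --force is given, when disk space runs low, or when the file is older than IMAGE_EXPIRE days. As a minimal stand-alone sketch of just that age check (the IMAGE_EXPIRE value below is only a placeholder; the real constant is defined near the top of image-prune, outside this excerpt):

    import os
    import time

    IMAGE_EXPIRE = 14  # placeholder value; image-prune defines the real constant

    def is_expired(path, now=None):
        """Return True when the file at 'path' is older than IMAGE_EXPIRE days."""
        now = now or time.time()
        expiry_threshold = now - IMAGE_EXPIRE * 86400  # 86400 seconds per day
        return os.lstat(path).st_mtime <= expiry_threshold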
162
bots/image-refresh
Executable file

@@ -0,0 +1,162 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2016 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

import os
import subprocess
import sys
import time

import task
from task import github, REDHAT_STORE

TRIGGERS = {
    "centos-7": [
        "centos-7@cockpit-project/starter-kit",
    ],
    "continuous-atomic": [
        "continuous-atomic@cockpit-project/cockpit-ostree",
    ],
    "debian-testing": [
        "debian-testing"
    ],
    "debian-stable": [
        "debian-stable"
    ],
    "fedora-29": [
        "fedora-atomic",
        "fedora-29@cockpit-project/cockpit-podman",
    ],
    "fedora-30": [
        "fedora-30",
        "fedora-30/selenium-chrome",
        "fedora-30/selenium-firefox",
        "fedora-30/selenium-edge",
        "fedora-30/container-bastion",
        "fedora-30@cockpit-project/starter-kit",
        "fedora-30@cockpit-project/cockpit-podman",
        "fedora-30@weldr/lorax",
        "fedora-30/live-iso@weldr/lorax",
        "fedora-30/qcow2@weldr/lorax",
        "fedora-30/chrome@weldr/cockpit-composer",
        "fedora-30/firefox@weldr/cockpit-composer",
        "fedora-30/edge@weldr/cockpit-composer",
    ],
    "fedora-atomic": [
        "fedora-atomic",
        "fedora-atomic@cockpit-project/cockpit-ostree",
    ],
    "fedora-testing": [
        "fedora-testing"
    ],
    "fedora-i386": [
        "fedora-i386"
    ],
    "ubuntu-1804": [
        "ubuntu-1804"
    ],
    "ubuntu-stable": [
        "ubuntu-stable"
    ],
    "openshift": [
        # FIXME: need to test a rhel-7.x branch here, once we can
    ],
    "ipa": [
        "fedora-30",
        "ubuntu-1804",
        "debian-stable"
    ],
    "selenium": [
        "fedora-30/selenium-chrome",
        "fedora-30/selenium-firefox",
    ],
    "rhel-7-7": [
        "rhel-7-7/firefox@weldr/cockpit-composer",
        "rhel-7-7@cockpit-project/cockpit/rhel-7.7",
    ],
    "rhel-8-0": [
        "rhel-8-0",
        "rhel-8-0-distropkg",
    ],
    "rhel-8-1": [
        "rhel-8-1",
        "rhel-8-1@cockpit-project/cockpit/rhel-8.1",
        "rhel-8-1@cockpit-project/cockpit/rhel-8-appstream",
        "rhel-8-1/chrome@weldr/cockpit-composer",
        "rhel-8-1@cockpit-project/cockpit-podman",
    ],
    "rhel-atomic": [
        "rhel-atomic@cockpit-project/cockpit-ostree",
    ]
}

STORES = {
    "rhel-7-7": REDHAT_STORE,
    "rhel-8-0": REDHAT_STORE,
    "rhel-8-1": REDHAT_STORE,
    "rhel-atomic": REDHAT_STORE,
    "windows-10": REDHAT_STORE,
}

BOTS = os.path.abspath(os.path.dirname(__file__))
BASE = os.path.normpath(os.path.join(BOTS, ".."))

sys.dont_write_bytecode = True


def run(image, verbose=False, **kwargs):
    if not image:
        raise RuntimeError("no image specified")

    triggers = TRIGGERS.get(image, [ ])
    store = STORES.get(image, None)

    # Cleanup any extraneous disk usage elsewhere
    subprocess.check_call([ os.path.join(BOTS, "vm-reset") ])

    cmd = [ os.path.join(BOTS, "image-create"), "--verbose", "--upload" ]
    if store:
        cmd += [ "--store", store ]
    cmd += [ image ]

    os.environ['VIRT_BUILDER_NO_CACHE'] = "yes"
    ret = subprocess.call(cmd)
    if ret:
        return ret

    branch = task.branch(image, "images: Update {0} image".format(image), pathspec="bots/images", **kwargs)
    if branch:
        pull = task.pull(branch, run_tests=False, **kwargs)

        # Trigger this pull request
        api = github.GitHub()
        head = pull["head"]["sha"]
        for trigger in triggers:
            api.post("statuses/{0}".format(head), { "state": "pending", "context": trigger,
                                                    "description": github.NOT_TESTED_DIRECT })

        # Wait until all of the statuses are present so the no-test label can
        # safely be removed by the task api
        for retry in range(20):
            if all(status in triggers for status in api.statuses(head).keys()):
                break
            time.sleep(6)
        else:
            raise RuntimeError("Failed to confirm the presence of all triggers")


if __name__ == '__main__':
    task.main(function=run, title="Refresh image")
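Each entry in TRIGGERS is a GitHub status context of the form image[/scenario][@owner/repo[/branch]]; posting those contexts as "pending" on the refresh pull request is what queues the corresponding test runs. The snippet below is a minimal sketch of posting a single such context, reusing the same api.post() call as run() above; the commit sha is only a placeholder:

    from task import github

    api = github.GitHub()
    head = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit sha
    # Mark one context as pending, as image-refresh does for every TRIGGERS[image] entry
    api.post("statuses/{0}".format(head),
             {"state": "pending",
              "context": "fedora-30@cockpit-project/starter-kit",
              "description": github.NOT_TESTED_DIRECT})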
110
bots/image-trigger
Executable file

@@ -0,0 +1,110 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2015 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.

DAYS = 7

REFRESH = {
    "candlepin": { "refresh-days": 120 },
    "centos-7": { },
    "continuous-atomic": { },
    "debian-testing": { },
    "debian-stable": { },
    "fedora-29": { },
    "fedora-30": { },
    "fedora-atomic": { },
    "fedora-testing": { },
    "fedora-i386": { },
    "ipa": { "refresh-days": 120 },
    "ubuntu-1804": { },
    "ubuntu-stable": { },
    "openshift": { "refresh-days": 30 },
    'rhel-7-7': { },
    'rhel-8-0': { },
    'rhel-8-1': { },
    'rhel-atomic': { },
    "selenium": { "refresh-days": 30 },
}

import argparse
import os
import sys
import tempfile
import time
import subprocess

sys.dont_write_bytecode = True

import task
from task import github


def main():
    parser = argparse.ArgumentParser(description='Ensure necessary issue exists for image refresh')
    parser.add_argument('-v', '--verbose', action="store_true", default=False,
                        help="Print verbose information")
    parser.add_argument("image", nargs="?")
    opts = parser.parse_args()
    api = github.GitHub()

    try:
        results = scan(api, opts.image, opts.verbose)
    except RuntimeError as ex:
        sys.stderr.write("image-trigger: " + str(ex) + "\n")
        return 1

    for result in results:
        if result:
            sys.stdout.write(result + "\n")

    return 0


# Prepare an image prune command
def scan_for_prune():
    tasks = [ ]
    stamp = os.path.join(tempfile.gettempdir(), "cockpit-image-prune.stamp")

    # Don't prune more than once per hour
    try:
        mtime = os.stat(stamp).st_mtime
    except OSError:
        mtime = 0
    if mtime < time.time() - 3600:
        tasks.append("PRIORITY=0000 touch {0} && bots/image-prune".format(stamp))

    return tasks


def scan(api, force, verbose):
    subprocess.check_call([ "git", "fetch", "origin", "master" ])
    for (image, options) in REFRESH.items():
        perform = False

        if force:
            perform = image == force
        else:
            days = options.get("refresh-days", DAYS)
            perform = task.stale(days, os.path.join("bots", "images", image), "origin/master")

        if perform:
            text = "Image refresh for {0}".format(image)
            issue = task.issue(text, text, "image-refresh", image)
            sys.stderr.write("#{0}: image-refresh {1}\n".format(issue["number"], image))

    return scan_for_prune()


if __name__ == '__main__':
    sys.exit(main())
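scan_for_prune() rate-limits pruning with a stamp file in the temporary directory: the prune task is only emitted when the stamp's mtime is more than an hour old (and the emitted task itself touches the stamp). A stand-alone sketch of the same guard:

    import os
    import tempfile
    import time

    def due_for_prune(max_age=3600):
        """Return True when the prune stamp is missing or older than max_age seconds,
        mirroring the check in scan_for_prune above (the touch happens elsewhere)."""
        stamp = os.path.join(tempfile.gettempdir(), "cockpit-image-prune.stamp")
        try:
            mtime = os.stat(stamp).st_mtime
        except OSError:
            mtime = 0
        return mtime < time.time() - max_age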
120
bots/image-upload
Executable file

@@ -0,0 +1,120 @@
#!/usr/bin/env python3

# This file is part of Cockpit.
#
# Copyright (C) 2013 Red Hat, Inc.
#
# Cockpit is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Cockpit is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.


# The default settings here should match one of the default download stores
DEFAULT_UPLOAD = [
    "https://images-cockpit.apps.ci.centos.org/",
    "https://209.132.184.41:8493/",
]

TOKEN = "~/.config/github-token"

import argparse
import getpass
import errno
import os
import socket
import subprocess
import sys
import urllib.parse

from machine import testvm

BOTS = os.path.dirname(__file__)


def upload(store, source):
    ca = os.path.join(BOTS, "images", "files", "ca.pem")
    url = urllib.parse.urlparse(store)

    # Start building the command
    cmd = ["curl", "--progress-bar", "--cacert", ca, "--fail", "--upload-file", source ]

    def try_curl(cmd):
        print("Uploading to", cmd[-1])
        # Passing through a non terminal stdout is necessary to make progress work
        curl = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        cat = subprocess.Popen(["cat"], stdin=curl.stdout)
        curl.stdout.close()
        ret = curl.wait()
        cat.wait()
        if ret != 0:
            sys.stderr.write("image-upload: unable to upload image: {0}\n".format(cmd[-1]))
        return ret

    # Parse the user name and token, if present
    user = url.username or getpass.getuser()
    try:
        with open(os.path.expanduser(TOKEN), "r") as gt:
            token = gt.read().strip()
            cmd += [ "--user", user + ":" + token ]
    except IOError as exc:
        if exc.errno == errno.ENOENT:
            pass

    # First try to use the original store URL, for stores with valid SSL cert on an OpenShift proxy
    if try_curl(cmd + [store]) == 0:
        return 0

    # Fall back for stores that use our self-signed cockpit certificate
    # Parse out the actual address to connect to and override certificate info
    defport = url.scheme == 'http' and 80 or 443
    ai = socket.getaddrinfo(url.hostname, url.port or defport, socket.AF_INET, 0, socket.IPPROTO_TCP)
    for (family, socktype, proto, canonname, sockaddr) in ai:
        resolve = "cockpit-tests:{1}:{0}".format(*sockaddr)
        curl_url = "https://cockpit-tests:{0}{1}".format(url.port or defport, url.path)
        ret = try_curl(cmd + ["--resolve", resolve, curl_url])
        if ret == 0:
            return 0

    return 1


def main():
    parser = argparse.ArgumentParser(description='Upload bot state or images')
    parser.add_argument("--store", action="append", default=[], help="Where to send state or images")
    parser.add_argument("--state", action="store_true", help="Images or state not recorded in git")
    parser.add_argument('image', nargs='*')
    args = parser.parse_args()

    data_dir = testvm.get_images_data_dir()
    sources = []
    for image in args.image:
        if args.state:
            source = os.path.join(data_dir, image)
        else:
            link = os.path.join(testvm.IMAGES_DIR, image)
            if not os.path.islink(link):
                parser.error("image link does not exist: " + image)
            source = os.path.join(data_dir, os.readlink(link))
        if not os.path.isfile(source):
            parser.error("image does not exist: " + image)
        sources.append(source)

    for source in sources:
        for store in (args.store or DEFAULT_UPLOAD):
            ret = upload(store, source)
            if ret == 0:
                return ret
        else:
            # all stores failed, so return last exit code
            return ret


if __name__ == '__main__':
    sys.exit(main())
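The fallback path in upload() uses curl's --resolve option to send the request to the store's real address while presenting the hostname cockpit-tests, so the connection still validates against the self-signed certificate shipped in bots/images/files/ca.pem. A minimal sketch of how one such invocation is assembled, with placeholder source and path values (the address is one of the DEFAULT_UPLOAD stores):

    # Illustrative only: build the fallback curl command for one resolved address,
    # the way upload() above does.
    sockaddr = ("209.132.184.41", 8493)   # (ip, port) as returned by getaddrinfo
    url_path = "/"                        # path part of the store URL (placeholder)
    resolve = "cockpit-tests:{1}:{0}".format(*sockaddr)
    curl_url = "https://cockpit-tests:{0}{1}".format(sockaddr[1], url_path)
    cmd = ["curl", "--fail", "--cacert", "bots/images/files/ca.pem",
           "--upload-file", "some-image.qcow2",   # placeholder file name
           "--resolve", resolve, curl_url]
    # cmd ends with: '--resolve', 'cockpit-tests:8493:209.132.184.41', 'https://cockpit-tests:8493/'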
1
bots/images/candlepin
Symbolic link
1
bots/images/candlepin
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
candlepin-3a39cecb7d2fea2e75b0093a891b3c476141406e20f332cb2a12f2dfb6e9d275.qcow2
|
||||||
1
bots/images/centos-7
Symbolic link
1
bots/images/centos-7
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
centos-7-3d4864aef14eb0fc7ca59857c99d75aadf22ea39286d56886e55f408dabe6943.qcow2
|
||||||
1
bots/images/cirros
Symbolic link
1
bots/images/cirros
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
cirros-d5fcb44e05f2dafc7eaab6bce906ba9cc06af51f84f1e7a527fe12102e34bbcf.qcow2
|
||||||
1
bots/images/continuous-atomic
Symbolic link
1
bots/images/continuous-atomic
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
continuous-atomic-dbc11a3d5baae076e743c572673c8675500eafcc7a8ac73f35e3dbac2871f611.qcow2
|
||||||
1
bots/images/debian-stable
Symbolic link
1
bots/images/debian-stable
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
debian-stable-20f723ddf309888c23b2e3c1269d49f73998ebe7b93e2ce8ef956fc75b82978e.qcow2
|
||||||
1
bots/images/debian-testing
Symbolic link
1
bots/images/debian-testing
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
debian-testing-67a76310b5690cb438eea9871943d1ed62bf4b58ab82f0fa3916036fed5fd4d6.qcow2
|
||||||
1
bots/images/fedora-23-stock
Symbolic link
1
bots/images/fedora-23-stock
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-23-stock-1a7ce615dcf1772ff6514148513fc88e420b9179f32c5395e3a27dab3b107dcc.qcow2
|
||||||
1
bots/images/fedora-29
Symbolic link
1
bots/images/fedora-29
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-29-7dffa701d72a40e18bbe60d6abd2b28074601e4830f62d24e70ea14de6b59714.qcow2
|
||||||
1
bots/images/fedora-30
Symbolic link
1
bots/images/fedora-30
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-30-6169ef919387b02fee781d978026ca00fb90d797d34362ee05aef74bfb33f7ce.qcow2
|
||||||
1
bots/images/fedora-atomic
Symbolic link
1
bots/images/fedora-atomic
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-atomic-9b7a5c5c6f4f71bae65d3e6de050325f849ac68a4de9a43382eddd251bb08d29.qcow2
|
||||||
1
bots/images/fedora-i386
Symbolic link
1
bots/images/fedora-i386
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-i386-f5c6c9730facd6b7d00d5c07f59cf7bf3a9ce3de1270f174cf5d9aefcd86a297.qcow2
|
||||||
1
bots/images/fedora-stock
Symbolic link
1
bots/images/fedora-stock
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
stock-fedora-22-x86_64-2.qcow2
|
||||||
1
bots/images/fedora-testing
Symbolic link
1
bots/images/fedora-testing
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-testing-72c693493fcbf66cb9ed70b1ceebd7b76ce32972bb1c00a90d1246e15a2ca62d.qcow2
|
||||||
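The bots/images entries above are one-line links: the file is named after the image (centos-7, fedora-30, ...) and points at the versioned file name, which appears to embed a sha256 checksum, e.g. fedora-30-6169ef91...f7ce.qcow2. A stand-alone sketch (not part of the bots themselves) that verifies a downloaded file against the checksum in its name, assuming that hash is the sha256 of the file content:

    import hashlib
    import os
    import re

    def checksum_matches(path):
        """Check a file's content against the sha256 embedded in its name (sketch)."""
        m = re.search(r"-([0-9a-f]{64})\.qcow2$", os.path.basename(path))
        if not m:
            return False
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == m.group(1)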
21
bots/images/files/ca.pem
Normal file
21
bots/images/files/ca.pem
Normal file
|
|
@ -0,0 +1,21 @@
|
||||||
|
# This is the CA for cockpit-tests images and data
|
||||||
|
|
||||||
|
-----BEGIN CERTIFICATE-----
|
||||||
|
MIIDDDCCAfSgAwIBAgIJANdoyGJiUz+8MA0GCSqGSIb3DQEBCwUAMDUxEDAOBgNV
|
||||||
|
BAoMB0NvY2twaXQxFDASBgNVBAsMC0NvY2twaXR1b3VzMQswCQYDVQQDDAJDQTAg
|
||||||
|
Fw0xOTAyMDcxMDE4NDNaGA8zMDE4MDYxMDEwMTg0M1owNTEQMA4GA1UECgwHQ29j
|
||||||
|
a3BpdDEUMBIGA1UECwwLQ29ja3BpdHVvdXMxCzAJBgNVBAMMAkNBMIIBIjANBgkq
|
||||||
|
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnvIZetd5yEhdE0c/9lYp1mC4M6qiu6E2
|
||||||
|
wVMbJLwsOuCyCSaZs5eDap1kremHz7ms+Fq07TUsN/o5U7PBnNgM3z6Zbv78QN6R
|
||||||
|
wn6ovLHfCyVqpg0nPMh3Hzpd0HDZQ+3eBayL2xfmBhU8p1+/vWVBOe49SDO15YDM
|
||||||
|
/Ian7I/HRsnprz5PH3atquSf+B8/Q+lgbO0dHKhXlbnTsSy/Esee82HhYrDlxD3p
|
||||||
|
Ow7EcZ7HACh/2dvF70BQpjnxTEc//4LNgP7hiqk4phsGzM/9QSFHW8ol4XlBDUi0
|
||||||
|
F5nNXZTs3jKITTOeda5mppuKoZoC+7iFk8dLvV0Y187xD38X2XgGnwIDAQABox0w
|
||||||
|
GzAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA
|
||||||
|
PHaVKb97ZN2m/sEVU+TGepVhCZ15frIaCJRuBPEs5rwcJjIctyRF4H6R6ec2b2lB
|
||||||
|
6ni9eqU6pPgS+rVJPsxqCpelQiCZALR7FYoA6+FtfpLkB5+zwJUfexr7Q6I7llWI
|
||||||
|
8OBOmtEADRv//2D+Iu6mM6nkzUK1K/wCcFS//roLjK/nKH2xd2lWbYk2Ro+nTPIm
|
||||||
|
slwgk6fAUXQcd5v/XqrySZ5jny73jMqo7SRVC5suNuAfiT0/YGvE5N99+I5AkD5I
|
||||||
|
R/R80/w1bDExfcqtx5UPBitMG2bx/gA07k4XbAGsEH5zvIdgsV9S5uYQEDjIRZys
|
||||||
|
ScLMpNOd3JyD7ncvr6Ga6g==
|
||||||
|
-----END CERTIFICATE-----
|
||||||
37
bots/images/files/openshift.kubeconfig
Normal file
37
bots/images/files/openshift.kubeconfig
Normal file
|
|
@ -0,0 +1,37 @@
|
||||||
|
apiVersion: v1
|
||||||
|
clusters:
|
||||||
|
- cluster:
|
||||||
|
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2akNDQWRLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFtTVNRd0lnWURWUVFEREJ0dmNHVnUKYzJocFpuUXRjMmxuYm1WeVFERTFOak15TXpFME1UVXdIaGNOTVRrd056RTFNakkxTmpVMVdoY05NalF3TnpFegpNakkxTmpVMldqQW1NU1F3SWdZRFZRUUREQnR2Y0dWdWMyaHBablF0YzJsbmJtVnlRREUxTmpNeU16RTBNVFV3CmdnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURNMDNBMFpxOGE3UHhYTTNIcjVEQ2oKdWtreDl1ZVUzT2dTRlB0Q2tWcFpkNXkwbXlnTXYyK2ZUVko2eXRLRXBXNXFJVkZiTFVmWlZZdWxmZjhTUVVHdwpVbXNoQTRQa3Y4MjVscVZjKzdwVitlRkRvTU42L2hrNUFWMkt0WTh3QUl4T2gzclZER2E3N3dGUlRQMVBmeG9YCnZHNElpd3dCZElGaXpNdEp2dThvWHB4dUpUZkJjN3ZldXlPT1NxMEFXaTZQRER0Vkdka252K3hyMFcyeEJBS1oKM2tsWmY2Tnp0WTRIcGR5YUNXYUw1MVhTZzY0TlNoc2VPUytHRUhJVkJPREJtUTNJTDBQRTh4WGhlbldkSFA0RQpOaElSU21JcFNrdC95M3RYWTBqRDRjNDdXaEpTeTh2VEFlT1phaTZSVU1LZTNKWlZxWXF5czVzbnVtQzdYNDRwCkFnTUJBQUdqSXpBaE1BNEdBMVVkRHdFQi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQmVFTmJIWVFqTVRFdmk4azhuNnczeTZxOWRqYnFMd29CNjJva2lVYy9hbGJvMQpHTDJkNDh6OTExYXd2bmE4UDJMNENSTEFRTDBpOFA4WkozN2I5VFlWU2JBNHE2emNiWCtuMlZlbmppd1JRRzZiCkkzNmI4OW4wWnRZdFU2VTBXY2hMc0p0VThybG5XUlhraEpnZERZeFR2elNrMGRxZEJ1UDdkTDROa1hJMlluNTAKeExHWjc0Ti9USngzRy9NN0tFcWoxVWh6cXprYWNPR3RCcVB6L1cxQXJMWFNwaGNrdHZiaGU5Q0hWSG5IaFMvMApYZWZiWjk4Vll6MHBCMWxObkdqTWx5TGlzclBMMUJteDk0VzBLL25RN1hHSmRKbk1ZckdHbWF2SWJnOUVqbVNxCkgyOEVMOTVUZ2xkVUJSa1ZmbVZRc1pTTHpta3JjSFZLWTFvMnVibUwKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
|
||||||
|
server: https://10.111.112.101:8443
|
||||||
|
name: 10-111-112-101:8443
|
||||||
|
contexts:
|
||||||
|
- context:
|
||||||
|
cluster: 10-111-112-101:8443
|
||||||
|
user: scruffy/10-111-112-101:8443
|
||||||
|
name: /10-111-112-101:8443/scruffy
|
||||||
|
- context:
|
||||||
|
cluster: 10-111-112-101:8443
|
||||||
|
namespace: default
|
||||||
|
user: system:admin/10-111-112-101:8443
|
||||||
|
name: default/10-111-112-101:8443/system:admin
|
||||||
|
- context:
|
||||||
|
cluster: 10-111-112-101:8443
|
||||||
|
namespace: marmalade
|
||||||
|
user: scruffy/10-111-112-101:8443
|
||||||
|
name: marmalade/10-111-112-101:8443/scruffy
|
||||||
|
- context:
|
||||||
|
cluster: 10-111-112-101:8443
|
||||||
|
namespace: pizzazz
|
||||||
|
user: scruffy/10-111-112-101:8443
|
||||||
|
name: pizzazz/10-111-112-101:8443/scruffy
|
||||||
|
current-context: default/10-111-112-101:8443/system:admin
|
||||||
|
kind: Config
|
||||||
|
preferences: {}
|
||||||
|
users:
|
||||||
|
- name: scruffy/10-111-112-101:8443
|
||||||
|
user:
|
||||||
|
token: pnHabWrkS-QNwczCj3dGg54ds8ck3NTuimQ-3PXSwl8
|
||||||
|
- name: system:admin/10-111-112-101:8443
|
||||||
|
user:
|
||||||
|
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKRENDQWd5Z0F3SUJBZ0lCQ0RBTkJna3Foa2lHOXcwQkFRc0ZBREFtTVNRd0lnWURWUVFEREJ0dmNHVnUKYzJocFpuUXRjMmxuYm1WeVFERTFOak15TXpFME1UVXdIaGNOTVRrd056RTFNakkxTmpVNFdoY05NakV3TnpFMApNakkxTmpVNVdqQk9NVFV3RlFZRFZRUUtFdzV6ZVhOMFpXMDZiV0Z6ZEdWeWN6QWNCZ05WQkFvVEZYTjVjM1JsCmJUcGpiSFZ6ZEdWeUxXRmtiV2x1Y3pFVk1CTUdBMVVFQXhNTWMzbHpkR1Z0T21Ga2JXbHVNSUlCSWpBTkJna3EKaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUE0WC9mSU16dThMY3JTRTZXb3hWUXZNWWgyRHZRTjEvNQp2SXZPRndVVFpTWERPcFc1Ly9tSHg0TFFid29mOXMyQUJjZzQ4N3c0UjhiMzY0blZhVVJGWVVnNHlycm8xOWpCCjJzZkZKbjd0UDgrVC9JNXB5WFY1SVBWMXFDN2ozNXMxUlhXb25icElwVzU5WHZPYU9hdm5CaDFHc3RaYW1VTjEKL1pEUXE4TlRuVXg1aEozWjZPSGx4bFNhUXhQZk9IbituRVZNZTNMUTNjeitydkNMalVKcVY4b05aTlpqUEVTZwpWY0dqb2dobW15MVZkNUhOeGV6WGdCYVZ0VDRkTWc0Ym5HRTBnT1B3OFdpUTNwNUY4M0RibXVUck5oYzBNdmhLClBIa0ZkWVBmeWpVMlNCSjY3aFltVmY4SXhFQVllT3VsOVdLWFFYditwRzZHS3pURkdsVTc1d0lEQVFBQm96VXcKTXpBT0JnTlZIUThCQWY4RUJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFILwpCQUl3QURBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWgzNHpFYVZ1UkhQczBqMHE5b1ZSSHZnNkJnQ09BWXcxCjBjNjdKeXBQeUZOaHVYS3d6WnNYK1FMRXZUKzdITUpwRzN2ZTJ2OU0wbTRsNjdWK0pXeTBiczczb0hrTTRCM28KOUxJK2hocTJaOUtLMVJQM0NHUVZZdDdTWmpuUzk3Nk55anR2OXVHY3h2aGZtZWlOY0Q0MVd2ZHZvRkthc2I5Sgp3Y2JDb0UwTFdMdENXdHFHbGo3WGF0c0FCL0dpK05GeEtnRWRZcEU5K0UwaXRCSzdIVzJaSHJCV0NMRmc5Mnl5ClhtZUdvVDZVeGg1MEZFSVpsWmtIdWxTckN4eXpwZUx6QUJrKzdYZmNjdm1NK04vOS9MV0pYK1JPZ3NEYm10ZUoKQzVtZUY3ZkhNemV6OXI3Zk9hSGg0YWVDU0NGb0JvbTlvM1c0WVdOOHpWaUJRbkJLSXZxU3BBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
|
||||||
|
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBNFgvZklNenU4TGNyU0U2V294VlF2TVloMkR2UU4xLzV2SXZPRndVVFpTWERPcFc1Ci8vbUh4NExRYndvZjlzMkFCY2c0ODd3NFI4YjM2NG5WYVVSRllVZzR5cnJvMTlqQjJzZkZKbjd0UDgrVC9JNXAKeVhWNUlQVjFxQzdqMzVzMVJYV29uYnBJcFc1OVh2T2FPYXZuQmgxR3N0WmFtVU4xL1pEUXE4TlRuVXg1aEozWgo2T0hseGxTYVF4UGZPSG4rbkVWTWUzTFEzY3orcnZDTGpVSnFWOG9OWk5aalBFU2dWY0dqb2dobW15MVZkNUhOCnhlelhnQmFWdFQ0ZE1nNGJuR0UwZ09QdzhXaVEzcDVGODNEYm11VHJOaGMwTXZoS1BIa0ZkWVBmeWpVMlNCSjYKN2hZbVZmOEl4RUFZZU91bDlXS1hRWHYrcEc2R0t6VEZHbFU3NXdJREFRQUJBb0lCQVFESWJTbGJOQXNrTlFucAphTUNIRDBrRm9HMHdqbWxRN3FOQUxGcnZKdm5JS3pwTTlndXVNcEcyaU5UTi9RZlFDM05Bc0dlK2E0cnljU3ltClU0bzEyQko2bHdDellGSFlsN1lseU8yNGU1UlA1U1k1a2pNQWRzTkV3aWJqWjFudXd6c2tFNkhkSDFlMmduQTQKVnZpN1RjazNMQXBNcGkwOGtETnRQcXZhSHZCUW01ODZJVXFIZW1HL3pKQlBWZCtoZ2EwdjhlWFVZSlFuZE1iWApQa2N1Q0ovYnI4a2pGaGhac2k0YjBmK3lubHB6WmdwZFhqeExtNmJhaC9wOFYwZVMyeGlzeDdVMkhJMFZ3UUZxCmwxMUhzWk81WW1jdGpVMVR0L1FkSk95OG9yWjYrb3cxQ2JEL3BySlJpM2c2K2JDVFR0N2RDU2wydmxJWlZCR0gKeHpLdzFSdlJBb0dCQU8xTkFLSXp4RkoyNGtvSlNRRWErMmRIeVdCMEdMRVB6WE9kNzJxZnFscWRhOWIxRW93awpRcUF2OVBqay83SUlpVTZXTXhOVTFJVUd3aitnbnlEbXVvUmZZTkRaZ3RxZGVCUmFycWQyeUhjNi95a2U4K2tpCnNDY0dheStIVTVodndyUWcwdEhSSE9DTjJhTkw4RFpjZjh4N0hndHVtdXYzWWo1VEo5VXNEd2NMQW9HQkFQTkUKemlFZmxNYWtNZ2lwRS9uaWRRWmZPR2FIa01zNG83bUhXQkRRc2I3VGhGUjNsMWxtNlI0bjJ4V01VajU3K28zQQppYkdlRzNlRFQ1WFpNZVdkd04zTE14amFzYzU2dzFyY2crSmgraTdNRWw5Skd1NHJhUE5DTGVSb3M2dkpLR2Z4CnVvZ2FHYy9yY0FvYm5jRFBya3lFZ2ZVbXNKN1VTeElLK1pBTE5mZ1ZBb0dCQUtPMTFQTVNIYVg2cUlFRlNOMC8KWFNQU2phWkNVZXFOaVdMekdZSUlwd0VleTVBZndPejM4eE1LSXNvM1NnUHNDYll5dndmZUpVT2s5d3ZvWnYvTwp6ZXlXMUhjaEtEcGtHcnlJRnlnbk5ZTzBLdWFXbVJWRXZod2VQSUlzclVwa0NBSTNCdHFEbHBXQXB4NFdQS0YwClRTS242WUZmaS9ldzBwRkcweHNvNnpFakFvR0FIMHlLK05nSFhFZGo2SmxZYUo0cVVGZVAraUVYRUE2SmdpVlgKdjFJYWpHTEtjOU92TldGNFBOa0Q1eEhXd3hOUWVVeDhhczNjMnRPYU9iMW9IaExkN2F0bk41dHJwUlZHYlRwUgovWjU5Z2VmZnRVTENwRUlSanJyRkRNNHJ6NzVoNUgzRmNoMXBsTWJGODRiNkZRU2plRlRVSTZhR3N1aTlmK1RKCmx5N2FFc0VDZ1lFQXpNdzNzZGFrRmtRRDNWR3Zudjl3ZytJOXY5VmZNazFKMDJpdFkxUWRSWGFlSjU1c2FoQ2EKMWtGKzh5OEJZVUg4TUJkd0FIRzNpSWJKcDRoRHZKendhTlBsVWRkcEVCWlJPYU9kY2M2TEVVYytWUUNXZzlObgpqWERUY0NzSWk3Z05Sa2lxdnNCaUdvUzhoNmtwakpTeTYxRTFVbWhwZWkreFNCLythcm03U2d3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
|
||||||
1
bots/images/ipa
Symbolic link
1
bots/images/ipa
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
ipa-fd92f013474c1625144b2c18424dffdc9386de5c2e493d4b0257f8ee725c177a.qcow2
|
||||||
1
bots/images/openshift
Symbolic link
1
bots/images/openshift
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
openshift-724bba0e96ba6fc8cfb4bb4fb8f814f9efb570b3109072c7a04091cb31986935.qcow2
|
||||||
1
bots/images/ovirt
Symbolic link
1
bots/images/ovirt
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
ovirt-f033c4457fecb1e9078eb16d7ac5239fe79455ca6b533f2a37de4f965cf174e7.qcow2
|
||||||
1
bots/images/rhel-7-7
Symbolic link
1
bots/images/rhel-7-7
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
rhel-7-7-67c37841a0ab1ead500e65acc767e7782e35d02f21ab8965ce40126c7c5cf386.qcow2
|
||||||
1
bots/images/rhel-8-0
Symbolic link
1
bots/images/rhel-8-0
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
rhel-8-0-164709a5e7b34b32da66724c6d8b7b907aa7446891d0d13383e060cd2b8b44ad.qcow2
|
||||||
1
bots/images/rhel-8-1
Symbolic link
1
bots/images/rhel-8-1
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
rhel-8-1-b6abe793117967124ff588c60516a408c40ddcd5e61bc60c3fcadd7ffebffd50.qcow2
|
||||||
1
bots/images/rhel-atomic
Symbolic link
1
bots/images/rhel-atomic
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
rhel-atomic-62290ef5921df5e247706e1fd424811884048ebb6b37109329f85256fa91c7a6.qcow2
|
||||||
78
bots/images/scripts/atomic.bootstrap
Executable file
78
bots/images/scripts/atomic.bootstrap
Executable file
|
|
@ -0,0 +1,78 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2015 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
out="$1"
|
||||||
|
base="$2"
|
||||||
|
|
||||||
|
redirect_base=$(curl -s -w "%{redirect_url}" "$base" -o /dev/null)
|
||||||
|
if [ -n "$redirect_base" ]; then
|
||||||
|
base="$redirect_base"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Lookup the newest base image recursively
|
||||||
|
url="$base"
|
||||||
|
while [ $# -gt 2 ]; do
|
||||||
|
fragment="$3"
|
||||||
|
|
||||||
|
if [ "$fragment" = "sort" ]; then
|
||||||
|
backref="$4"
|
||||||
|
pattern="$5"
|
||||||
|
|
||||||
|
result="`wget -q -O- $url | grep -oE "$pattern" | sed -E "s/${pattern}/\\\\${backref} \\0/" | sort -V -k1 | tail -1`"
|
||||||
|
fragment="`echo $result | cut -f2 -d' '`"
|
||||||
|
|
||||||
|
|
||||||
|
if [ -z "$fragment" ]; then
|
||||||
|
echo "Could not find '$pattern' at: $url" >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
shift; shift
|
||||||
|
fi
|
||||||
|
|
||||||
|
base="$url"
|
||||||
|
url="$base/$fragment"
|
||||||
|
|
||||||
|
shift
|
||||||
|
done
|
||||||
|
|
||||||
|
# we link to the file so wget can properly detect if we have already downloaded it
|
||||||
|
# note that due to mirroring, timestamp comparison can result in unnecessary downloading
|
||||||
|
out_base="`dirname $out`"
|
||||||
|
intermediate="$out_base/$fragment"
|
||||||
|
|
||||||
|
if [ "$intermediate" != "$out" ]; then
|
||||||
|
wget --no-clobber --directory-prefix="$out_base" "$base/$fragment"
|
||||||
|
cp "$intermediate" "$out"
|
||||||
|
else
|
||||||
|
rm -f "$out"
|
||||||
|
wget --directory-prefix="$out_base" "$base/$fragment"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Make the image be at least 12 Gig. During boot, docker-storage-setup
|
||||||
|
# will grow the partitions etc as appropriate, and atomic.setup will
|
||||||
|
# explicitly grow the docker pool.
|
||||||
|
|
||||||
|
vsize=$(qemu-img info "$out" --output=json | python3 -c 'import json, sys; print(json.load(sys.stdin)["virtual-size"])')
|
||||||
|
|
||||||
|
if [ "$vsize" -lt 12884901888 ]; then
|
||||||
|
qemu-img resize "$out" 12884901888
|
||||||
|
fi
|
||||||
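The 12884901888 figure used by atomic.bootstrap above is 12 GiB (12 × 1024³ bytes). A hedged Python equivalent of the same "grow to at least 12 GiB" step, using the qemu-img info/resize calls the script relies on:

    import json
    import subprocess

    MIN_SIZE = 12 * 1024 * 1024 * 1024  # 12884901888 bytes, i.e. 12 GiB

    def ensure_min_size(image, min_size=MIN_SIZE):
        """Grow a qcow2 image to at least min_size, like atomic.bootstrap does."""
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", image, "--output=json"], universal_newlines=True))
        if info["virtual-size"] < min_size:
            subprocess.check_call(["qemu-img", "resize", image, str(min_size)])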
1
bots/images/scripts/candlepin.bootstrap
Symbolic link
1
bots/images/scripts/candlepin.bootstrap
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
centos-7.bootstrap
|
||||||
65
bots/images/scripts/candlepin.setup
Executable file
65
bots/images/scripts/candlepin.setup
Executable file
|
|
@ -0,0 +1,65 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
YUM_INSTALL="yum --setopt=skip_missing_names_on_install=False -y install"
|
||||||
|
|
||||||
|
# We deploy candlepin via ansible
|
||||||
|
$YUM_INSTALL epel-release
|
||||||
|
|
||||||
|
# Install dependencies
|
||||||
|
CANDLEPIN_DEPS="\
|
||||||
|
ansible \
|
||||||
|
git \
|
||||||
|
openssl \
|
||||||
|
"
|
||||||
|
|
||||||
|
$YUM_INSTALL $CANDLEPIN_DEPS
|
||||||
|
|
||||||
|
mkdir -p playbookdir; cd playbookdir;
|
||||||
|
|
||||||
|
mkdir -p roles
|
||||||
|
git clone https://github.com/candlepin/ansible-role-candlepin.git roles/candlepin
|
||||||
|
|
||||||
|
# Run the playbook
|
||||||
|
cat > inventory <<- EOF
|
||||||
|
[dev]
|
||||||
|
localhost
|
||||||
|
EOF
|
||||||
|
|
||||||
|
useradd -m admin
|
||||||
|
echo admin:foobar | chpasswd
|
||||||
|
echo 'admin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/admin
|
||||||
|
|
||||||
|
cat > playbook.yml <<- EOF
|
||||||
|
- hosts: dev
|
||||||
|
|
||||||
|
environment:
|
||||||
|
JAVA_HOME: /usr/lib/jvm/java-1.8.0/
|
||||||
|
|
||||||
|
roles:
|
||||||
|
- role: candlepin
|
||||||
|
candlepin_git_pull: True
|
||||||
|
candlepin_deploy_args: "-g -a -f -t"
|
||||||
|
candlepin_user: admin
|
||||||
|
candlepin_user_home: /home/admin
|
||||||
|
candlepin_checkout: /home/admin/candlepin
|
||||||
|
EOF
|
||||||
|
|
||||||
|
ansible-playbook -i inventory -c local -v --skip-tags 'system_update' playbook.yml
|
||||||
|
|
||||||
|
rm -rf playbookdir
|
||||||
|
|
||||||
|
# reduce image size
|
||||||
|
yum clean all
|
||||||
|
/var/lib/testvm/zero-disk.setup
|
||||||
|
|
||||||
|
# Final tweaks
|
||||||
|
|
||||||
|
rm -rf /var/log/journal/*
|
||||||
|
echo "kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e" > /etc/sysctl.d/50-coredump.conf
|
||||||
|
|
||||||
|
# Audit events to the journal
|
||||||
|
rm -f '/etc/systemd/system/multi-user.target.wants/auditd.service'
|
||||||
|
rm -rf /var/log/audit/
|
||||||
|
|
||||||
4
bots/images/scripts/centos-7.bootstrap
Executable file
4
bots/images/scripts/centos-7.bootstrap
Executable file
|
|
@ -0,0 +1,4 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
$BASE/virt-install-fedora "$1" x86_64 "http://mirror.centos.org/centos/7/os/x86_64/"
|
||||||
8
bots/images/scripts/centos-7.install
Executable file
8
bots/images/scripts/centos-7.install
Executable file
|
|
@ -0,0 +1,8 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
# remove cockpit distro packages, testing with upstream master
|
||||||
|
rpm --erase --verbose cockpit cockpit-ws cockpit-bridge cockpit-system
|
||||||
|
|
||||||
|
/var/lib/testvm/fedora.install "$@"
|
||||||
1
bots/images/scripts/centos-7.setup
Symbolic link
1
bots/images/scripts/centos-7.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
rhel.setup
|
||||||
28
bots/images/scripts/cirros.bootstrap
Executable file
28
bots/images/scripts/cirros.bootstrap
Executable file
|
|
@ -0,0 +1,28 @@
|
||||||
|
#!/bin/sh
|
||||||
|
set -eux
|
||||||
|
|
||||||
|
OUTPUT="$1"
|
||||||
|
|
||||||
|
curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-i386-disk.img > "$OUTPUT"
|
||||||
|
|
||||||
|
# prepare a cloud-init iso for disabling network source, to avoid a 90s timeout at boot
|
||||||
|
WORKDIR=$(mktemp -d)
|
||||||
|
trap "rm -rf '$WORKDIR'" EXIT INT QUIT PIPE
|
||||||
|
cd "$WORKDIR"
|
||||||
|
|
||||||
|
cat > meta-data <<EOF
|
||||||
|
{ "instance-id": "nocloud" }
|
||||||
|
EOF
|
||||||
|
|
||||||
|
cat > user-data <<EOF
|
||||||
|
#!/bin/sh
|
||||||
|
set -ex
|
||||||
|
sed -i 's/configdrive *//; s/ec2 *//' /etc/cirros-init/config
|
||||||
|
(sleep 1; poweroff) &
|
||||||
|
EOF
|
||||||
|
|
||||||
|
genisoimage -input-charset utf-8 -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data
|
||||||
|
|
||||||
|
# boot it once with the cloud-init ISO
|
||||||
|
qemu-system-x86_64 -enable-kvm -nographic -net none \
|
||||||
|
-drive file="$OUTPUT",if=virtio -cdrom cloud-init.iso
|
||||||
9
bots/images/scripts/continuous-atomic.bootstrap
Executable file
9
bots/images/scripts/continuous-atomic.bootstrap
Executable file
|
|
@ -0,0 +1,9 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
url="https://cloud.centos.org/centos/7/atomic/images"
|
||||||
|
prefix="CentOS-Atomic-Host-GenericCloud.qcow2"
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
$BASE/atomic.bootstrap "$1" "$url" "$prefix"
|
||||||
5
bots/images/scripts/continuous-atomic.install
Executable file
5
bots/images/scripts/continuous-atomic.install
Executable file
|
|
@ -0,0 +1,5 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
/var/lib/testvm/atomic.install --skip cockpit-sosreport --extra "/root/rpms/libssh*" --extra "/var/tmp/build-results/cockpit-dashboard*" "$@"
|
||||||
72
bots/images/scripts/continuous-atomic.setup
Executable file
72
bots/images/scripts/continuous-atomic.setup
Executable file
|
|
@ -0,0 +1,72 @@
|
||||||
|
#!/bin/bash
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2016 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# The docker pool should grow automatically as needed, but we grow it
|
||||||
|
# explicitly here anyway. This is hopefully more reliable.
|
||||||
|
# HACK: docker falls over regularly, print its log if it does
|
||||||
|
systemctl start docker || journalctl -u docker
|
||||||
|
lvresize atomicos/root -l+50%FREE -r
|
||||||
|
if lvs atomicos/docker-pool 2>/dev/null; then
|
||||||
|
lvresize atomicos/docker-pool -l+100%FREE
|
||||||
|
elif lvs atomicos/docker-root-lv; then
|
||||||
|
lvresize atomicos/docker-root-lv -l+100%FREE
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Get the centos cockpit/ws image
|
||||||
|
docker pull registry.centos.org/cockpit/ws:latest
|
||||||
|
docker tag registry.centos.org/cockpit/ws cockpit/ws
|
||||||
|
|
||||||
|
# docker images that we need for integration testing
|
||||||
|
/var/lib/testvm/docker-images.setup
|
||||||
|
|
||||||
|
# Configure core dumps
|
||||||
|
echo "kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e" > /etc/sysctl.d/50-coredump.conf
|
||||||
|
|
||||||
|
# Download the libssh RPM plus dependencies which we'll use for
|
||||||
|
# package overlay. The only way to do this is via a container
|
||||||
|
. /etc/os-release
|
||||||
|
REPO="updates"
|
||||||
|
if [ "$ID" = "rhel" ]; then
|
||||||
|
subscription-manager repos --enable rhel-7-server-extras-rpms
|
||||||
|
REPO="rhel-7-server-extras-rpms"
|
||||||
|
ID="rhel7"
|
||||||
|
fi
|
||||||
|
docker run --rm --volume=/etc/yum.repos.d:/etc/yum.repos.d:z --volume=/root/rpms:/tmp/rpms:rw,z "$ID:$VERSION_ID" /bin/sh -cex "yum install -y findutils createrepo_c && yum install -y --downloadonly --enablerepo=$REPO libssh && find /var -name '*.rpm' | while read rpm; do mv -v \$rpm /tmp/rpms; done; createrepo_c /tmp/rpms"
|
||||||
|
rm -f /etc/yum.repos.d/*
|
||||||
|
cat >/etc/yum.repos.d/deps.repo <<EOF
|
||||||
|
[deps]
|
||||||
|
baseurl=file:///root/rpms
|
||||||
|
enabled=1
|
||||||
|
EOF
|
||||||
|
|
||||||
|
# Switch to continuous stream
|
||||||
|
ostree remote add --set=gpg-verify=false centos-atomic-continuous https://ci.centos.org/artifacts/sig-atomic/rdgo/centos-continuous/ostree/repo/
|
||||||
|
rpm-ostree rebase centos-atomic-continuous:centos-atomic-host/7/x86_64/devel/continuous
|
||||||
|
|
||||||
|
ostree checkout centos-atomic-continuous:centos-atomic-host/7/x86_64/devel/continuous /var/local-tree
|
||||||
|
|
||||||
|
# reduce image size
|
||||||
|
/var/lib/testvm/zero-disk.setup
|
||||||
|
|
||||||
|
# Prevent SSH from hanging for a long time when no external network access
|
||||||
|
echo 'UseDNS no' >> /etc/ssh/sshd_config
|
||||||
|
|
||||||
|
# Final tweaks
|
||||||
|
rm -rf /var/log/journal/*
|
||||||
6
bots/images/scripts/debian-stable.bootstrap
Executable file
6
bots/images/scripts/debian-stable.bootstrap
Executable file
|
|
@ -0,0 +1,6 @@
|
||||||
|
#! /bin/sh -ex
|
||||||
|
ARCH=x86_64
|
||||||
|
DEBIAN_LATEST=$(virt-builder -l | grep "$ARCH" | sort -r | grep -m1 '^debian-' | cut -d' ' -f1)
|
||||||
|
exec $(dirname $0)/lib/debian.bootstrap "$1" "$2" "$DEBIAN_LATEST" "deb http://deb.debian.org/debian stable main
|
||||||
|
deb http://deb.debian.org/debian stable-updates main
|
||||||
|
deb http://security.debian.org/ stable/updates main"
|
||||||
8
bots/images/scripts/debian-stable.install
Executable file
8
bots/images/scripts/debian-stable.install
Executable file
|
|
@ -0,0 +1,8 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
/var/lib/testvm/debian.install "$@"
|
||||||
|
|
||||||
|
# HACK: https://bugs.debian.org/914694
|
||||||
|
sed -i '/IndividualCalls/ s/=no/=yes/' /etc/firewalld/firewalld.conf
|
||||||
1
bots/images/scripts/debian-stable.setup
Symbolic link
1
bots/images/scripts/debian-stable.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
debian.setup
|
||||||
4
bots/images/scripts/debian-testing.bootstrap
Executable file
4
bots/images/scripts/debian-testing.bootstrap
Executable file
|
|
@ -0,0 +1,4 @@
|
||||||
|
#! /bin/sh -ex
|
||||||
|
ARCH=x86_64
|
||||||
|
DEBIAN_LATEST=$(virt-builder -l | grep "$ARCH" | sort -r | grep -m1 '^debian-' | cut -d' ' -f1)
|
||||||
|
exec $(dirname $0)/lib/debian.bootstrap "$1" "$2" "$DEBIAN_LATEST" "deb http://deb.debian.org/debian testing main"
|
||||||
8
bots/images/scripts/debian-testing.install
Executable file
8
bots/images/scripts/debian-testing.install
Executable file
|
|
@ -0,0 +1,8 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
/var/lib/testvm/debian.install "$@"
|
||||||
|
|
||||||
|
# HACK: https://bugs.debian.org/914694
|
||||||
|
sed -i '/IndividualCalls/ s/=no/=yes/' /etc/firewalld/firewalld.conf
|
||||||
1
bots/images/scripts/debian-testing.setup
Symbolic link
1
bots/images/scripts/debian-testing.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
debian.setup
|
||||||
168
bots/images/scripts/debian.setup
Executable file
168
bots/images/scripts/debian.setup
Executable file
|
|
@ -0,0 +1,168 @@
|
||||||
|
#! /bin/bash
|
||||||
|
# Shared .setup between all Debian/Ubuntu flavors
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# Enable a console on ttyS0 so that we can log-in via vm-run.
|
||||||
|
# and make the boot up more verbose
|
||||||
|
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT/# GRUB_CMDLINE_LINUX_DEFAULT/' /etc/default/grub
|
||||||
|
|
||||||
|
# We install all dependencies of the cockpit packages since we want
|
||||||
|
# them to not spontaneously change from one test run to the next when
|
||||||
|
# the distribution repository is updated.
|
||||||
|
#
|
||||||
|
COCKPIT_DEPS="\
|
||||||
|
cryptsetup \
|
||||||
|
docker.io \
|
||||||
|
libblockdev-mdraid2 \
|
||||||
|
libjson-glib-1.0-0 \
|
||||||
|
libpcp3 \
|
||||||
|
libpolkit-agent-1-0 \
|
||||||
|
libpolkit-gobject-1-0 \
|
||||||
|
libpwquality-tools \
|
||||||
|
libssh-4 \
|
||||||
|
libteam-utils \
|
||||||
|
libvirt-daemon-system \
|
||||||
|
libvirt-dbus \
|
||||||
|
libosinfo-bin \
|
||||||
|
network-manager \
|
||||||
|
pcp \
|
||||||
|
policykit-1 \
|
||||||
|
python3-dbus \
|
||||||
|
qemu-block-extra \
|
||||||
|
realmd \
|
||||||
|
selinux-basics \
|
||||||
|
thin-provisioning-tools \
|
||||||
|
unattended-upgrades \
|
||||||
|
tuned \
|
||||||
|
xdg-utils \
|
||||||
|
udisks2 \
|
||||||
|
udisks2-lvm2 \
|
||||||
|
"
|
||||||
|
|
||||||
|
# We also install the packages necessary to join a FreeIPA domain so
|
||||||
|
# that we don't have to go to the network during a test run.
|
||||||
|
IPA_CLIENT_PACKAGES="\
|
||||||
|
freeipa-client \
|
||||||
|
sssd-tools \
|
||||||
|
sssd-dbus \
|
||||||
|
packagekit \
|
||||||
|
"
|
||||||
|
|
||||||
|
TEST_PACKAGES="\
|
||||||
|
acl \
|
||||||
|
curl \
|
||||||
|
firewalld \
|
||||||
|
gdb \
|
||||||
|
iproute2 \
|
||||||
|
mdadm \
|
||||||
|
nfs-server \
|
||||||
|
qemu-kvm \
|
||||||
|
socat \
|
||||||
|
systemd-coredump \
|
||||||
|
virtinst \
|
||||||
|
xfsprogs \
|
||||||
|
sosreport \
|
||||||
|
"
|
||||||
|
|
||||||
|
RELEASE=$(grep -m1 ^deb /etc/apt/sources.list | awk '{print $3}')
|
||||||
|
case "$RELEASE" in
|
||||||
|
bionic)
|
||||||
|
# these packages are not in Ubuntu 18.04
|
||||||
|
COCKPIT_DEPS="${COCKPIT_DEPS/libvirt-dbus /}"
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
|
||||||
|
if grep -q 'ID=ubuntu' /etc/os-release; then
|
||||||
|
PBUILDER_OPTS='COMPONENTS="main universe"'
|
||||||
|
|
||||||
|
# We want to use/test NetworkManager instead of netplan/networkd for ethernets
|
||||||
|
mkdir -p /etc/NetworkManager/conf.d
|
||||||
|
touch /etc/NetworkManager/conf.d/10-globally-managed-devices.conf
|
||||||
|
fi
|
||||||
|
|
||||||
|
useradd -m -U -c Administrator -G sudo -s /bin/bash admin
|
||||||
|
echo admin:foobar | chpasswd
|
||||||
|
|
||||||
|
export DEBIAN_FRONTEND=noninteractive
|
||||||
|
apt-get -y update
|
||||||
|
DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y dist-upgrade
|
||||||
|
eatmydata apt-get -y install $TEST_PACKAGES $COCKPIT_DEPS $IPA_CLIENT_PACKAGES
|
||||||
|
[ -z "$COCKPIT_DEPS_EXPERIMENTAL" ] || eatmydata apt-get -y install $COCKPIT_DEPS_EXPERIMENTAL
|
||||||
|
|
||||||
|
# Prepare for building
|
||||||
|
#
|
||||||
|
|
||||||
|
# extract control files and adjust them for our release, so that we can parse the build deps
|
||||||
|
mkdir -p /tmp/out
|
||||||
|
curl -L https://github.com/cockpit-project/cockpit/archive/master.tar.gz | tar -C /tmp/out --strip-components=1 --wildcards -zxf - '*/debian/'
|
||||||
|
/tmp/out/tools/debian/adjust-for-release $(lsb_release -sc)
|
||||||
|
|
||||||
|
# Disable build-dep installation for the real builds
|
||||||
|
cat > ~/.pbuilderrc <<- EOF
|
||||||
|
DISTRIBUTION=$RELEASE
|
||||||
|
PBUILDERSATISFYDEPENDSCMD=true
|
||||||
|
$PBUILDER_OPTS
|
||||||
|
EOF
|
||||||
|
|
||||||
|
eatmydata apt-get -y install dpkg-dev pbuilder
|
||||||
|
|
||||||
|
pbuilder --create --extrapackages "fakeroot $PBUILDER_EXTRA"
|
||||||
|
/usr/lib/pbuilder/pbuilder-satisfydepends-classic --control /tmp/out/tools/debian/control --force-version --echo|grep apt-get | pbuilder --login --save-after-login
|
||||||
|
rm -rf /tmp/out
|
||||||
|
|
||||||
|
# Debian does not automatically start the default libvirt network
|
||||||
|
virsh net-autostart default
|
||||||
|
|
||||||
|
# Don't automatically update on boot or daily
|
||||||
|
systemctl disable apt-daily.service apt-daily.timer || true
|
||||||
|
|
||||||
|
# Enable coredumping via systemd
|
||||||
|
echo "kernel.core_pattern=|/lib/systemd/systemd-coredump %P %u %g %s %t %c %e" > /etc/sysctl.d/50-coredump.conf
|
||||||
|
printf 'DefaultLimitCORE=infinity\n' >> /etc/systemd/system.conf
|
||||||
|
|
||||||
|
# HACK: we need to restart it in case aufs-dkms was installed after docker.io
|
||||||
|
# and thus docker.io auto-switches its backend
|
||||||
|
systemctl restart docker || journalctl -u docker
|
||||||
|
I=$(docker info)
|
||||||
|
if ! echo "$I" | grep -Eq 'Storage.*(aufs|overlay)'; then
|
||||||
|
echo "ERROR! docker does not use aufs or overlayfs"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# docker images that we need for integration testing
|
||||||
|
/var/lib/testvm/docker-images.setup
|
||||||
|
|
||||||
|
rm -rf /var/lib/docker/devicemapper
|
||||||
|
|
||||||
|
# in case there are unnecessary packages
|
||||||
|
eatmydata apt-get -y autoremove || true
|
||||||
|
|
||||||
|
# reduce image size
|
||||||
|
apt-get clean
|
||||||
|
pbuilder clean
|
||||||
|
rm -f /var/cache/apt/*cache.bin
|
||||||
|
/var/lib/testvm/zero-disk.setup
|
||||||
|
|
||||||
|
# Final tweaks
|
||||||
|
|
||||||
|
# Enable persistent journal
|
||||||
|
mkdir -p /var/log/journal
|
||||||
|
|
||||||
|
# Allow root login with password
|
||||||
|
sed -i 's/^[# ]*PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
|
||||||
|
|
||||||
|
# At least debian-9 virt-install image only has RSA key
|
||||||
|
[ -e /etc/ssh/ssh_host_ed25519_key ] || ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
|
||||||
|
[ -e /etc/ssh/ssh_host_ecdsa_key ] || ssh-keygen -f /etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa
|
||||||
|
|
||||||
|
# Prevent SSH from hanging for a long time when no external network access
|
||||||
|
echo 'UseDNS no' >> /etc/ssh/sshd_config
|
||||||
|
|
||||||
|
# HACK: https://bugzilla.mindrot.org/show_bug.cgi?id=2512
|
||||||
|
# Disable the restarting of sshd when networking changes
|
||||||
|
ln -snf /bin/true /etc/network/if-up.d/openssh-server
|
||||||
|
|
||||||
|
# Stop showing 'To run a command as administrator (user "root"), use "sudo <command>". See "man
|
||||||
|
# sudo_root" for details.` message in admins terminal.
|
||||||
|
touch /home/admin/.sudo_as_admin_successful
|
||||||
21
bots/images/scripts/fedora-23-stock.bootstrap
Executable file
21
bots/images/scripts/fedora-23-stock.bootstrap
Executable file
|
|
@ -0,0 +1,21 @@
|
||||||
|
#!/bin/bash
|
||||||
|
#
|
||||||
|
# Copyright (C) 2015 Red Hat Inc.
|
||||||
|
#
|
||||||
|
# This program is free software; you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program; if not, write to the Free Software
|
||||||
|
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
|
||||||
|
# 02110-1301 USA.
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
$BASE/virt-install-fedora "$1" x86_64 "https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/23/Server/x86_64/os/"
|
||||||
11
bots/images/scripts/fedora-23-stock.setup
Executable file
11
bots/images/scripts/fedora-23-stock.setup
Executable file
|
|
@ -0,0 +1,11 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
useradd -c Administrator -G wheel admin
|
||||||
|
echo foobar | passwd --stdin admin
|
||||||
|
|
||||||
|
dnf -y update
|
||||||
|
dnf -y install fedora-release-server
|
||||||
|
firewall-cmd --permanent --add-service cockpit
|
||||||
|
|
||||||
|
# Phantom can't use TLS..
|
||||||
|
sed -i -e 's/ExecStart=.*/\0 --no-tls/' /usr/lib/systemd/system/cockpit.service
|
||||||
21
bots/images/scripts/fedora-29.bootstrap
Executable file
21
bots/images/scripts/fedora-29.bootstrap
Executable file
|
|
@ -0,0 +1,21 @@
|
||||||
|
#!/bin/bash
|
||||||
|
#
|
||||||
|
# Copyright (C) 2018 Red Hat Inc.
|
||||||
|
#
|
||||||
|
# This program is free software; you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program; if not, write to the Free Software
|
||||||
|
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
|
||||||
|
# 02110-1301 USA.
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
$BASE/virt-install-fedora "$1" x86_64 "http://dl.fedoraproject.org/pub/fedora/linux/releases/29/Server/x86_64/os/"
|
||||||
4
bots/images/scripts/fedora-29.install
Executable file
4
bots/images/scripts/fedora-29.install
Executable file
|
|
@ -0,0 +1,4 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
/var/lib/testvm/fedora.install "$@"
|
||||||
1
bots/images/scripts/fedora-29.setup
Symbolic link
1
bots/images/scripts/fedora-29.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora.setup
|
||||||
21
bots/images/scripts/fedora-30.bootstrap
Executable file
21
bots/images/scripts/fedora-30.bootstrap
Executable file
|
|
@ -0,0 +1,21 @@
|
||||||
|
#!/bin/bash
|
||||||
|
#
|
||||||
|
# Copyright (C) 2019 Red Hat Inc.
|
||||||
|
#
|
||||||
|
# This program is free software; you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program; if not, write to the Free Software
|
||||||
|
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
|
||||||
|
# 02110-1301 USA.
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
$BASE/virt-install-fedora "$1" x86_64 "http://dl.fedoraproject.org/pub/fedora/linux/releases/30/Server/x86_64/os/"
|
||||||
4
bots/images/scripts/fedora-30.install
Executable file
4
bots/images/scripts/fedora-30.install
Executable file
|
|
@ -0,0 +1,4 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
/var/lib/testvm/fedora.install "$@"
|
||||||
1
bots/images/scripts/fedora-30.setup
Symbolic link
1
bots/images/scripts/fedora-30.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora.setup
|
||||||
14
bots/images/scripts/fedora-atomic.bootstrap
Executable file
14
bots/images/scripts/fedora-atomic.bootstrap
Executable file
|
|
@ -0,0 +1,14 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
url="https://download.fedoraproject.org/pub/alt/atomic/stable/"
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
|
||||||
|
# The Fedora URLs have the version twice in the name. for example:
|
||||||
|
# https://dl.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-28-20180425.0/AtomicHost/x86_64/images/Fedora-AtomicHost-28-20180425.0.x86_64.qcow2
|
||||||
|
$BASE/atomic.bootstrap "$1" "$url" \
|
||||||
|
sort 3 "Fedora(-atomic)?-[0-9][0-9](-updates)?-([-0-9\.]+)" \
|
||||||
|
"AtomicHost" "x86_64" "images" \
|
||||||
|
sort 1 "Fedora-AtomicHost-([-0-9\.]+).x86_64.qcow2"
|
||||||
9
bots/images/scripts/fedora-atomic.install
Executable file
9
bots/images/scripts/fedora-atomic.install
Executable file
|
|
@ -0,0 +1,9 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
/var/lib/testvm/atomic.install --verbose --skip cockpit-kdump --extra "/root/rpms/libssh*" "$@"
|
||||||
|
|
||||||
|
# HACK: https://github.com/projectatomic/rpm-ostree/issues/1360
|
||||||
|
# rpm-ostree upgrade --check otherwise fails
|
||||||
|
mkdir -p /var/cache/rpm-ostree
|
||||||
18
bots/images/scripts/fedora-atomic.setup
Executable file
18
bots/images/scripts/fedora-atomic.setup
Executable file
|
|
@ -0,0 +1,18 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# HACK: https://bugzilla.redhat.com/show_bug.cgi?id=1341829
|
||||||
|
# SELinux breaks coredumping on fedora-25
|
||||||
|
printf '(allow init_t domain (process (rlimitinh)))\n' > domain.cil
|
||||||
|
semodule -i domain.cil
|
||||||
|
|
||||||
|
# HACK: docker falls over regularly, print its log if it does
|
||||||
|
systemctl start docker || journalctl -u docker
|
||||||
|
|
||||||
|
os=$(ls /ostree/repo/refs/remotes/fedora-atomic/*/)
|
||||||
|
docker pull "registry.fedoraproject.org/f$os/cockpit"
|
||||||
|
docker tag "registry.fedoraproject.org/f$os/cockpit" cockpit/ws
|
||||||
|
|
||||||
|
|
||||||
|
/var/lib/testvm/atomic.setup
|
||||||
21
bots/images/scripts/fedora-i386.bootstrap
Executable file
21
bots/images/scripts/fedora-i386.bootstrap
Executable file
|
|
@ -0,0 +1,21 @@
|
||||||
|
#!/bin/bash
|
||||||
|
#
|
||||||
|
# Copyright (C) 2019 Red Hat Inc.
|
||||||
|
#
|
||||||
|
# This program is free software; you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program; if not, write to the Free Software
|
||||||
|
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
|
||||||
|
# 02110-1301 USA.
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
$BASE/virt-install-fedora "$1" i386 "https://dl.fedoraproject.org/pub/fedora-secondary/releases/30/Server/i386/os/"
|
||||||
1
bots/images/scripts/fedora-i386.install
Symbolic link
1
bots/images/scripts/fedora-i386.install
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-30.install
|
||||||
1
bots/images/scripts/fedora-i386.setup
Symbolic link
1
bots/images/scripts/fedora-i386.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora.setup
|
||||||
11
bots/images/scripts/fedora-stock.setup
Executable file
11
bots/images/scripts/fedora-stock.setup
Executable file
|
|
@ -0,0 +1,11 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
useradd -c Administrator -G wheel admin
|
||||||
|
echo foobar | passwd --stdin admin
|
||||||
|
|
||||||
|
dnf -y update
|
||||||
|
dnf -y install fedora-release-server
|
||||||
|
firewall-cmd --permanent --add-service cockpit
|
||||||
|
|
||||||
|
# Phantom can't use TLS..
|
||||||
|
sed -i -e 's/ExecStart=.*/\0 --no-tls/' /usr/lib/systemd/system/cockpit.service
|
||||||
1
bots/images/scripts/fedora-testing.bootstrap
Symbolic link
1
bots/images/scripts/fedora-testing.bootstrap
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-30.bootstrap
|
||||||
1
bots/images/scripts/fedora-testing.install
Symbolic link
1
bots/images/scripts/fedora-testing.install
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-30.install
|
||||||
1
bots/images/scripts/fedora-testing.setup
Symbolic link
1
bots/images/scripts/fedora-testing.setup
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora.setup
|
||||||
193
bots/images/scripts/fedora.setup
Executable file
193
bots/images/scripts/fedora.setup
Executable file
|
|
@ -0,0 +1,193 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
IMAGE="$1"
|
||||||
|
|
||||||
|
# avoid failures when running image builds in a non-English locale (ssh transfers the host environment)
|
||||||
|
unset LANGUAGE
|
||||||
|
unset LANG
|
||||||
|
export LC_ALL=C.utf8
|
||||||
|
|
||||||
|
# keep this in sync with avocado/selenium image mapping in bots/tests-invoke
|
||||||
|
if [ "$IMAGE" = fedora-30 ]; then
|
||||||
|
AVOCADO=1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# HACK - virt-resize might not be able to resize our xfs rootfs,
|
||||||
|
# depending on how it was compiled and which plugins are installed,
|
||||||
|
# and will just silently not do it. So we do it here.
|
||||||
|
#
|
||||||
|
xfs_growfs /
|
||||||
|
df -h /
|
||||||
|
|
||||||
|
echo foobar | passwd --stdin root
|
||||||
|
|
||||||
|
HAVE_KUBERNETES=
|
||||||
|
if [ $(uname -m) = x86_64 ]; then
|
||||||
|
HAVE_KUBERNETES=1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# We install all dependencies of the cockpit packages since we want
|
||||||
|
# them to not spontaneously change from one test run to the next when
|
||||||
|
# the distribution repository is updated.
|
||||||
|
#
|
||||||
|
COCKPIT_DEPS="\
|
||||||
|
atomic \
|
||||||
|
device-mapper-multipath \
|
||||||
|
docker \
|
||||||
|
etcd \
|
||||||
|
glib-networking \
|
||||||
|
json-glib \
|
||||||
|
kexec-tools \
|
||||||
|
libssh \
|
||||||
|
libvirt-daemon-kvm \
|
||||||
|
libvirt-client \
|
||||||
|
libvirt-dbus \
|
||||||
|
NetworkManager-team \
|
||||||
|
openssl \
|
||||||
|
PackageKit \
|
||||||
|
pcp \
|
||||||
|
pcp-libs \
|
||||||
|
qemu \
|
||||||
|
realmd \
|
||||||
|
selinux-policy-targeted \
|
||||||
|
setroubleshoot-server \
|
||||||
|
sos \
|
||||||
|
sscg \
|
||||||
|
system-logos \
|
||||||
|
subscription-manager \
|
||||||
|
tuned \
|
||||||
|
virt-install \
|
||||||
|
"
|
||||||
|
|
||||||
|
COCKPIT_DEPS="$COCKPIT_DEPS udisks2 udisks2-lvm2 udisks2-iscsi"
|
||||||
|
|
||||||
|
[ -z "$HAVE_KUBERNETES" ] || COCKPIT_DEPS="$COCKPIT_DEPS kubernetes"
|
||||||
|
|
||||||
|
# We also install the packages necessary to join a FreeIPA domain so
|
||||||
|
# that we don't have to go to the network during a test run.
|
||||||
|
#
|
||||||
|
IPA_CLIENT_PACKAGES="\
|
||||||
|
freeipa-client \
|
||||||
|
oddjob \
|
||||||
|
oddjob-mkhomedir \
|
||||||
|
sssd \
|
||||||
|
sssd-dbus \
|
||||||
|
libsss_sudo \
|
||||||
|
"
|
||||||
|
|
||||||
|
TEST_PACKAGES="\
|
||||||
|
systemtap-runtime-virtguest \
|
||||||
|
valgrind \
|
||||||
|
gdb \
|
||||||
|
targetcli \
|
||||||
|
dnf-automatic \
|
||||||
|
cryptsetup \
|
||||||
|
clevis-luks \
|
||||||
|
socat \
|
||||||
|
tang \
|
||||||
|
podman \
|
||||||
|
libvirt-daemon-config-network \
|
||||||
|
"
|
||||||
|
|
||||||
|
# HACK - For correct work of ABRT in Fedora 26 Alpha release a following
|
||||||
|
# packages are necessary. In Fedora 26 Beta and later these packages should be
|
||||||
|
# installed by default. See https://bugzilla.redhat.com/show_bug.cgi?id=1436941
|
||||||
|
#
|
||||||
|
ABRT_PACKAGES="\
|
||||||
|
abrt-desktop \
|
||||||
|
libreport-plugin-systemd-journal \
|
||||||
|
"
|
||||||
|
|
||||||
|
rm -rf /etc/sysconfig/iptables
|
||||||
|
|
||||||
|
maybe() { if type "$1" >/dev/null 2>&1; then "$@"; fi; }
|
||||||
|
|
||||||
|
# For the D-Bus test server
|
||||||
|
maybe firewall-cmd --permanent --add-port 8765/tcp
|
||||||
|
|
||||||
|
echo 'NETWORKING=yes' > /etc/sysconfig/network
|
||||||
|
|
||||||
|
useradd -c Administrator -G wheel admin
|
||||||
|
echo foobar | passwd --stdin admin
|
||||||
|
|
||||||
|
if [ "${IMAGE%-i386}" != "$IMAGE" ]; then
|
||||||
|
TEST_PACKAGES="${TEST_PACKAGES/podman /}"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ "${IMAGE%-testing}" != "$IMAGE" ]; then
|
||||||
|
dnf config-manager --set-enabled updates-testing
|
||||||
|
fi
|
||||||
|
|
||||||
|
dnf $DNF_OPTS -y upgrade
|
||||||
|
dnf $DNF_OPTS -y install $TEST_PACKAGES $COCKPIT_DEPS $IPA_CLIENT_PACKAGES $ABRT_PACKAGES
|
||||||
|
|
||||||
|
if [ -n "$AVOCADO" ]; then
|
||||||
|
|
||||||
|
# enable python3 avocado support repository
|
||||||
|
dnf module install -y avocado:69lts
|
||||||
|
|
||||||
|
dnf $DNF_OPTS -y install \
|
||||||
|
fontconfig \
|
||||||
|
npm \
|
||||||
|
chromium-headless \
|
||||||
|
python3-libvirt \
|
||||||
|
python3-avocado \
|
||||||
|
python3-avocado-plugins-output-html \
|
||||||
|
python3-selenium
|
||||||
|
|
||||||
|
npm -g install chrome-remote-interface
|
||||||
|
echo 'NODE_PATH=/usr/lib/node_modules' >> /etc/environment
|
||||||
|
fi
|
||||||
|
|
||||||
|
dnf $DNF_OPTS -y install mock dnf-plugins-core rpm-build
|
||||||
|
useradd -c Builder -G mock builder
|
||||||
|
|
||||||
|
if [ "${IMAGE%-testing}" != "$IMAGE" ]; then
|
||||||
|
# Enable updates-testing in mock
|
||||||
|
echo "config_opts['yum.conf'] += '[updates-testing]\nenabled=1'" >>/etc/mock/default.cfg
|
||||||
|
fi
|
||||||
|
|
||||||
|
# HACK - mock --installdeps is broken, it seems that it forgets to
|
||||||
|
# copy the source rpm to a location that dnf can actually access. A
|
||||||
|
# workaround is to pass "--no-bootstrap-chroot".
|
||||||
|
#
|
||||||
|
# When you remove this hack, also remove it in fedora-*.install.
|
||||||
|
#
|
||||||
|
# https://bugzilla.redhat.com/show_bug.cgi?id=1447627
|
||||||
|
|
||||||
|
opsys=$(cut -d '-' -f 1 <<< "$IMAGE")
|
||||||
|
version=$(cut -d '-' -f 2 <<< "$IMAGE")
|
||||||
|
# If version is not number (testing/i386) then use Fedora 30
|
||||||
|
if ! [ "$version" -eq "$version" ] 2>/dev/null; then version=30; fi
|
||||||
|
|
||||||
|
su builder -c "/usr/bin/mock --no-bootstrap-chroot --verbose -i $(/var/lib/testvm/build-deps.sh "$opsys $version")"
|
||||||
|
su builder -c "/usr/bin/mock --install --verbose rpmlint"
|
||||||
|
|
||||||
|
# HACK: docker falls over regularly, print its log if it does
|
||||||
|
systemctl start docker || journalctl -u docker
|
||||||
|
|
||||||
|
# our cockpit/base container is only really a thing on x86_64, just skip it on other arches
|
||||||
|
if [ $(uname -m) = x86_64 ]; then
|
||||||
|
docker build -t cockpit/base /var/tmp/cockpit-base
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Configure kubernetes
|
||||||
|
[ -z "$HAVE_KUBERNETES" ] || /var/lib/testvm/kubernetes.setup
|
||||||
|
|
||||||
|
# docker images that we need for integration testing
|
||||||
|
/var/lib/testvm/docker-images.setup
|
||||||
|
|
||||||
|
# reduce image size
|
||||||
|
dnf clean all
|
||||||
|
/var/lib/testvm/zero-disk.setup
|
||||||
|
|
||||||
|
ln -sf ../selinux/config /etc/sysconfig/selinux
|
||||||
|
printf "SELINUX=enforcing\nSELINUXTYPE=targeted\n" > /etc/selinux/config
|
||||||
|
|
||||||
|
# Prevent SSH from hanging for a long time when no external network access
|
||||||
|
echo 'UseDNS no' >> /etc/ssh/sshd_config
|
||||||
|
|
||||||
|
# Audit events to the journal
|
||||||
|
rm -f '/etc/systemd/system/multi-user.target.wants/auditd.service'
|
||||||
|
rm -rf /var/log/audit/
|
||||||
1
bots/images/scripts/ipa.bootstrap
Symbolic link
1
bots/images/scripts/ipa.bootstrap
Symbolic link
|
|
@ -0,0 +1 @@
|
||||||
|
fedora-29.bootstrap
|
||||||
49
bots/images/scripts/ipa.setup
Executable file
49
bots/images/scripts/ipa.setup
Executable file
|
|
@ -0,0 +1,49 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -eufx
|
||||||
|
|
||||||
|
# ipa requires an UTF-8 locale
|
||||||
|
export LC_ALL=C.UTF-8
|
||||||
|
|
||||||
|
echo foobar | passwd --stdin root
|
||||||
|
|
||||||
|
dnf -y remove firewalld
|
||||||
|
dnf -y update
|
||||||
|
dnf -y install freeipa-server freeipa-server-dns bind bind-dyndb-ldap iptables
|
||||||
|
|
||||||
|
iptables -F
|
||||||
|
|
||||||
|
nmcli con add con-name "static-eth1" ifname eth1 type ethernet ip4 "10.111.112.100/20" ipv4.dns "10.111.112.100" gw4 "10.111.112.1"
|
||||||
|
nmcli con up "static-eth1"
|
||||||
|
hostnamectl set-hostname f0.cockpit.lan
|
||||||
|
|
||||||
|
# Let's make sure that ipa-server-install doesn't block on
|
||||||
|
# /dev/random.
|
||||||
|
#
|
||||||
|
rm -f /dev/random
|
||||||
|
ln -s /dev/urandom /dev/random
|
||||||
|
|
||||||
|
ipa-server-install -U -p foobarfoo -a foobarfoo -n cockpit.lan -r COCKPIT.LAN --setup-dns --no-forwarders
|
||||||
|
|
||||||
|
# Make sure any initial password change is overridden
|
||||||
|
printf 'foobarfoo\nfoobarfoo\nfoobarfoo\n' | kinit admin@COCKPIT.LAN
|
||||||
|
|
||||||
|
# Default password expiry of 90 days is impractical
|
||||||
|
ipa pwpolicy-mod --minlife=0 --maxlife=1000
|
||||||
|
# Change password to apply new password policy
|
||||||
|
printf 'foobarfoo\nfoobarfoo\n' | ipa user-mod --password admin
|
||||||
|
ipa user-show --all admin
|
||||||
|
|
||||||
|
# Allow "admins" IPA group members to run sudo
|
||||||
|
# This is an "unbreak my setup" step and ought to happen by default.
|
||||||
|
# See https://pagure.io/freeipa/issue/7538
|
||||||
|
ipa-advise enable-admins-sudo | sh -ex
|
||||||
|
|
||||||
|
ipa dnsconfig-mod --forwarder=8.8.8.8
|
||||||
|
|
||||||
|
ln -sf ../selinux/config /etc/sysconfig/selinux
|
||||||
|
echo 'SELINUX=permissive' > /etc/selinux/config
|
||||||
|
|
||||||
|
# reduce image size
|
||||||
|
dnf clean all
|
||||||
|
/var/lib/testvm/zero-disk.setup
|
||||||
303
bots/images/scripts/lib/atomic.install
Executable file
303
bots/images/scripts/lib/atomic.install
Executable file
|
|
@ -0,0 +1,303 @@
|
||||||
|
#!/usr/bin/python2
|
||||||
|
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2015 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
import subprocess
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import shutil
|
||||||
|
try:
|
||||||
|
from urllib.request import URLopener
|
||||||
|
except ImportError:
|
||||||
|
from urllib import URLopener # Python 2
|
||||||
|
import argparse
|
||||||
|
import json
|
||||||
|
|
||||||
|
BASEDIR = os.path.dirname(__file__)
|
||||||
|
|
||||||
|
class AtomicCockpitInstaller:
|
||||||
|
branch = None
|
||||||
|
checkout_location = "/var/local-tree"
|
||||||
|
repo_location = "/var/local-repo"
|
||||||
|
rpm_location = "/usr/share/rpm"
|
||||||
|
key_id = "95A8BA1754D0E95E2B3A98A7EE15015654780CBD"
|
||||||
|
port = 12345
|
||||||
|
|
||||||
|
# Support installing random packages if needed.
|
||||||
|
external_packages = {}
|
||||||
|
|
||||||
|
# Temporarily force cockpit-system instead of cockpit-shell
|
||||||
|
packages_force_install = [ "cockpit-system",
|
||||||
|
"cockpit-docker",
|
||||||
|
"cockpit-kdump",
|
||||||
|
"cockpit-networkmanager",
|
||||||
|
"cockpit-sosreport" ]
|
||||||
|
|
||||||
|
def __init__(self, rpms=None, extra_rpms=None, verbose=False):
|
||||||
|
self.verbose = verbose
|
||||||
|
self.rpms = rpms
|
||||||
|
self.extra_rpms = extra_rpms
|
||||||
|
status = json.loads(subprocess.check_output(["rpm-ostree", "status", "--json"], universal_newlines=True))
|
||||||
|
origin = None
|
||||||
|
for deployment in status.get("deployments", []):
|
||||||
|
if deployment.get("booted"):
|
||||||
|
origin = deployment["origin"]
|
||||||
|
|
||||||
|
if not origin:
|
||||||
|
raise Exception("Couldn't find origin")
|
||||||
|
|
||||||
|
self.branch = origin.split(":", 1)[-1]
|
||||||
|
|
||||||
|
def setup_dirs(self):
|
||||||
|
if self.verbose:
|
||||||
|
print("setting up new ostree repo")
|
||||||
|
|
||||||
|
try:
|
||||||
|
shutil.rmtree(self.repo_location)
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
|
||||||
|
os.makedirs(self.repo_location)
|
||||||
|
subprocess.check_call(["ostree", "init", "--repo", self.repo_location,
|
||||||
|
"--mode", "archive-z2"])
|
||||||
|
|
||||||
|
if not os.path.exists(self.checkout_location):
|
||||||
|
if self.verbose:
|
||||||
|
print("cloning current branch")
|
||||||
|
|
||||||
|
subprocess.check_call(["ostree", "checkout", self.branch,
|
||||||
|
self.checkout_location])
|
||||||
|
|
||||||
|
# move /usr/etc to /etc, makes rpm installs easier
|
||||||
|
subprocess.check_call(["mv", os.path.join(self.checkout_location, "usr", "etc"),
|
||||||
|
os.path.join(self.checkout_location, "etc")])
|
||||||
|
|
||||||
|
def switch_to_local_tree(self):
|
||||||
|
if self.verbose:
|
||||||
|
print("install new ostree commit")
|
||||||
|
|
||||||
|
# Not an error if this fails
|
||||||
|
subprocess.call(["ostree", "remote", "delete", "local"])
|
||||||
|
|
||||||
|
subprocess.check_call(["ostree", "remote", "add", "local",
|
||||||
|
"file://{}".format(self.repo_location),
|
||||||
|
"--no-gpg-verify"])
|
||||||
|
|
||||||
|
# HACK: https://github.com/candlepin/subscription-manager/issues/1404
|
||||||
|
subprocess.call(["systemctl", "disable", "rhsmcertd"])
|
||||||
|
subprocess.call(["systemctl", "stop", "rhsmcertd"])
|
||||||
|
|
||||||
|
status = subprocess.check_output(["rpm-ostree", "status"])
|
||||||
|
if b"local:" in status:
|
||||||
|
subprocess.check_call(["rpm-ostree", "upgrade"])
|
||||||
|
else:
|
||||||
|
try:
|
||||||
|
subprocess.check_call(["setenforce", "0"])
|
||||||
|
subprocess.check_call(["rpm-ostree", "rebase",
|
||||||
|
"local:{0}".format(self.branch)])
|
||||||
|
except:
|
||||||
|
os.system("sysctl kernel.core_pattern")
|
||||||
|
os.system("coredumpctl || true")
|
||||||
|
raise
|
||||||
|
finally:
|
||||||
|
subprocess.check_call(["setenforce", "1"])
|
||||||
|
|
||||||
|
def commit_to_repo(self):
|
||||||
|
if self.verbose:
|
||||||
|
print("commit package changes to our repo")
|
||||||
|
|
||||||
|
# move etc back to /usr/etc
|
||||||
|
subprocess.check_call(["mv", os.path.join(self.checkout_location, "etc"),
|
||||||
|
os.path.join(self.checkout_location, "usr", "etc")])
|
||||||
|
|
||||||
|
subprocess.check_call(["ostree", "commit", "-s", "cockpit-tree",
|
||||||
|
"--repo", self.repo_location,
|
||||||
|
"-b", self.branch,
|
||||||
|
"--add-metadata-string", "version=cockpit-base.1",
|
||||||
|
"--tree=dir={0}".format(self.checkout_location),
|
||||||
|
"--gpg-sign={0}".format(self.key_id),
|
||||||
|
"--gpg-homedir={0}".format(BASEDIR)])
|
||||||
|
|
||||||
|
def install_packages(self, packages, deps=True, replace=False):
|
||||||
|
args = ["rpm", "-U", "--root", self.checkout_location,
|
||||||
|
"--dbpath", self.rpm_location]
|
||||||
|
|
||||||
|
if replace:
|
||||||
|
args.extend(["--replacepkgs", "--replacefiles"])
|
||||||
|
|
||||||
|
if not deps:
|
||||||
|
args.append("--nodeps")
|
||||||
|
|
||||||
|
for package in packages:
|
||||||
|
args.append(os.path.abspath(os.path.join(os.getcwd(), package)))
|
||||||
|
|
||||||
|
subprocess.check_call(args)
|
||||||
|
|
||||||
|
def remove_packages(self, packages):
|
||||||
|
args = ["rpm", "-e", "--root", self.checkout_location,
|
||||||
|
"--dbpath", self.rpm_location]
|
||||||
|
args.extend(packages)
|
||||||
|
subprocess.check_call(args)
|
||||||
|
|
||||||
|
def package_basename(self, package):
|
||||||
|
""" only accept package with the name 'cockpit-%s-*' and return 'cockpit-%s' or None"""
|
||||||
|
basename = "-".join(package.split("-")[:2])
|
||||||
|
if basename.startswith("cockpit-"):
|
||||||
|
return basename
|
||||||
|
else:
|
||||||
|
return None
|
||||||
|
|
||||||
|
def update_container(self):
|
||||||
|
""" Install the latest cockpit RPMs in our container"""
|
||||||
|
rpm_args = []
|
||||||
|
for package in self.rpms:
|
||||||
|
if 'cockpit-ws' in package or 'cockpit-dashboard' in package or 'cockpit-bridge' in package:
|
||||||
|
rpm_args.append("/host" + package)
|
||||||
|
extra_args = []
|
||||||
|
for package in self.extra_rpms:
|
||||||
|
extra_args.append("/host" + package)
|
||||||
|
|
||||||
|
if rpm_args:
|
||||||
|
subprocess.check_call(["docker", "run", "--name", "build-cockpit",
|
||||||
|
"-d", "--privileged", "-v", "/:/host",
|
||||||
|
"cockpit/ws", "sleep", "1d"])
|
||||||
|
if self.verbose:
|
||||||
|
print("updating cockpit-ws container")
|
||||||
|
|
||||||
|
if extra_args:
|
||||||
|
subprocess.check_call(["docker", "exec", "build-cockpit",
|
||||||
|
"rpm", "--install", "--verbose", "--force"] + extra_args)
|
||||||
|
|
||||||
|
subprocess.check_call(["docker", "exec", "build-cockpit",
|
||||||
|
"rpm", "--freshen", "--verbose", "--force"] + rpm_args)
|
||||||
|
|
||||||
|
# if we update the RPMs, also update the scripts, to keep them in sync
|
||||||
|
subprocess.check_call(["docker", "exec", "build-cockpit", "sh", "-exc",
|
||||||
|
"cp /host/var/tmp/containers/ws/atomic-* /container/"])
|
||||||
|
|
||||||
|
subprocess.check_call(["docker", "commit", "build-cockpit",
|
||||||
|
"cockpit/ws"])
|
||||||
|
subprocess.check_call(["docker", "kill", "build-cockpit"])
|
||||||
|
subprocess.check_call(["docker", "rm", "build-cockpit"])
|
||||||
|
|
||||||
|
def package_basenames(self, package_names):
|
||||||
|
""" convert a list of package names to a list of their basenames """
|
||||||
|
return list(filter(lambda s: s is not None, map(self.package_basename, package_names)))
|
||||||
|
|
||||||
|
def get_installed_cockpit_packages(self):
|
||||||
|
""" get list installed cockpit packages """
|
||||||
|
packages = subprocess.check_output("rpm -qa | grep cockpit", shell=True, universal_newlines=True)
|
||||||
|
|
||||||
|
if self.verbose:
|
||||||
|
print("installed packages: {0}".format(packages))
|
||||||
|
|
||||||
|
installed_packages = packages.strip().split("\n")
|
||||||
|
return installed_packages
|
||||||
|
|
||||||
|
def clean_network(self):
|
||||||
|
if self.verbose:
|
||||||
|
print("clean network configuration:")
|
||||||
|
subprocess.check_call(["rm", "-rf", "/var/lib/NetworkManager"])
|
||||||
|
subprocess.check_call(["rm", "-rf", "/var/lib/dhcp"])
|
||||||
|
|
||||||
|
def run(self):
|
||||||
|
# Delete previous deployment if it's present
|
||||||
|
output = subprocess.check_output(["ostree", "admin", "status"])
|
||||||
|
if output.count(b"origin refspec") != 1:
|
||||||
|
subprocess.check_call(["ostree", "admin", "undeploy", "1"])
|
||||||
|
|
||||||
|
self.setup_dirs()
|
||||||
|
|
||||||
|
installed_packages = self.get_installed_cockpit_packages()
|
||||||
|
self.remove_packages(installed_packages)
|
||||||
|
|
||||||
|
packages_to_install = self.package_basenames(installed_packages)
|
||||||
|
for p in self.packages_force_install:
|
||||||
|
if not p in packages_to_install:
|
||||||
|
if self.verbose:
|
||||||
|
print("adding package %s (forced)" % (p))
|
||||||
|
packages_to_install.append(p)
|
||||||
|
|
||||||
|
packages_to_install = list(filter(lambda p: any(os.path.split(p)[1].startswith(base) for base in packages_to_install), self.rpms))
|
||||||
|
|
||||||
|
if self.verbose:
|
||||||
|
print("packages to install:")
|
||||||
|
print(packages_to_install)
|
||||||
|
|
||||||
|
if self.external_packages:
|
||||||
|
names = self.external_packages.keys()
|
||||||
|
if self.verbose:
|
||||||
|
print("external packages to install:")
|
||||||
|
print(list(names))
|
||||||
|
|
||||||
|
downloader = URLopener()
|
||||||
|
for name, url in self.external_packages.items():
|
||||||
|
downloader.retrieve(url, name)
|
||||||
|
|
||||||
|
self.install_packages(names, replace=True)
|
||||||
|
|
||||||
|
for name in names:
|
||||||
|
os.remove(name)
|
||||||
|
|
||||||
|
self.install_packages(packages_to_install)
|
||||||
|
no_deps = [x for x in self.rpms \
|
||||||
|
if os.path.split(x)[-1].startswith("cockpit-tests") or
|
||||||
|
os.path.split(x)[-1].startswith("cockpit-machines")]
|
||||||
|
self.install_packages(no_deps, deps=False, replace=True)
|
||||||
|
|
||||||
|
# If firewalld is installed, we need to poke a hole for cockpit, so
|
||||||
|
# that we can run firewall tests on it (change firewall-cmd to
|
||||||
|
# --add-service=cockpit once all supported atomics ship with the
|
||||||
|
# service file)
|
||||||
|
if subprocess.call(["systemctl", "enable", "--now", "firewalld"]) == 0:
|
||||||
|
subprocess.call(["firewall-cmd", "--permanent", "--add-port=9090/tcp"])
|
||||||
|
|
||||||
|
self.commit_to_repo()
|
||||||
|
self.switch_to_local_tree()
|
||||||
|
self.update_container()
|
||||||
|
self.clean_network()
|
||||||
|
|
||||||
|
parser = argparse.ArgumentParser(description='Install Cockpit in Atomic')
|
||||||
|
parser.add_argument('-v', '--verbose', action='store_true', help='Display verbose progress details')
|
||||||
|
parser.add_argument('-q', '--quick', action='store_true', help='Build faster')
|
||||||
|
parser.add_argument('--build', action='store_true', help='Build')
|
||||||
|
parser.add_argument('--install', action='store_true', help='Install')
|
||||||
|
parser.add_argument('--extra', action='append', default=[], help='Extra packages to install inside the container')
|
||||||
|
parser.add_argument('--skip', action='append', default=[], help='Packes to skip during installation')
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
if args.build:
|
||||||
|
sys.stderr.write("Can't build on Atomic\n")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
if args.install:
|
||||||
|
os.chdir("build-results")
|
||||||
|
# Force skip cockpit-dashboard
|
||||||
|
if args.skip:
|
||||||
|
skip = list(args.skip)
|
||||||
|
else:
|
||||||
|
skip = []
|
||||||
|
skip.append("cockpit-dashboard")
|
||||||
|
|
||||||
|
rpms = [os.path.abspath(f) for f in os.listdir(".")
|
||||||
|
if (f.endswith(".rpm") and not f.endswith(".src.rpm")
|
||||||
|
and not any(f.startswith(s) for s in args.skip))]
|
||||||
|
cockpit_installer = AtomicCockpitInstaller(rpms=rpms, extra_rpms=args.extra, verbose=args.verbose)
|
||||||
|
cockpit_installer.run()
|
||||||
|
|
||||||
|
# vim: ft=python
|
||||||
78
bots/images/scripts/lib/atomic.setup
Executable file
78
bots/images/scripts/lib/atomic.setup
Executable file
|
|
@ -0,0 +1,78 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2015 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# The docker pool should grow automatically as needed, but we grow it
|
||||||
|
# explicitly here anyway. This is hopefully more reliable.
|
||||||
|
# Newer Fedora versions configure docker to use the root LV
|
||||||
|
# HACK: docker falls over regularly, print its log if it does
|
||||||
|
systemctl start docker || journalctl -u docker
|
||||||
|
lvresize atomicos/root -l+60%FREE -r
|
||||||
|
if lvs atomicos/docker-pool 2>/dev/null; then
|
||||||
|
lvresize atomicos/docker-pool -l+100%FREE
|
||||||
|
elif lvs atomicos/docker-root-lv; then
|
||||||
|
lvresize atomicos/docker-root-lv -l+100%FREE
|
||||||
|
fi
|
||||||
|
|
||||||
|
# docker images that we need for integration testing
|
||||||
|
/var/lib/testvm/docker-images.setup
|
||||||
|
|
||||||
|
# Download the libssh RPM plus dependencies which we'll use for
|
||||||
|
# package overlay. The only way to do this is via a container
|
||||||
|
. /etc/os-release
|
||||||
|
REPO="updates"
|
||||||
|
if [ "$ID" = "rhel" ]; then
|
||||||
|
subscription-manager repos --enable rhel-7-server-extras-rpms
|
||||||
|
REPO="rhel-7-server-extras-rpms"
|
||||||
|
ID="rhel7"
|
||||||
|
fi
|
||||||
|
docker run --rm --volume=/etc/yum.repos.d:/etc/yum.repos.d:z --volume=/root/rpms:/tmp/rpms:rw,z "$ID:$VERSION_ID" /bin/sh -cex "yum install -y findutils createrepo yum-utils && (cd /tmp/; yumdownloader --enablerepo=$REPO libssh) && find /tmp -name '*.$(uname -m).*rpm' | while read rpm; do mv -v \$rpm /tmp/rpms; done; createrepo /tmp/rpms"
|
||||||
|
rm -f /etc/yum.repos.d/*
|
||||||
|
cat >/etc/yum.repos.d/deps.repo <<EOF
|
||||||
|
[deps]
|
||||||
|
baseurl=file:///root/rpms
|
||||||
|
enabled=1
|
||||||
|
EOF
|
||||||
|
|
||||||
|
# fully upgrade host. Anything past this point can't touch /etc
|
||||||
|
# Upgrade host if there is a valid upgrade available (we might be on a RC)
|
||||||
|
if rpm-ostree upgrade --check; then
|
||||||
|
atomic host upgrade
|
||||||
|
# HACK - Find a better way to compute the ref.
|
||||||
|
# https://lists.projectatomic.io/projectatomic-archives/atomic-devel/2016-July/msg00015.html
|
||||||
|
|
||||||
|
checkout=$(atomic host status --json | python -c 'import json; import sys; j = json.loads(sys.stdin.readline()); print j["deployments"][0]["origin"]')
|
||||||
|
else
|
||||||
|
checkout=$(atomic host status --json | python -c 'import json; import sys; j = json.loads(sys.stdin.readline()); print [x for x in j["deployments"] if x["booted"]][0]["checksum"]')
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Checkout the just upgraded os branch since we'll use it every time
|
||||||
|
# we build a new tree.
|
||||||
|
|
||||||
|
ostree checkout "$checkout" /var/local-tree
|
||||||
|
|
||||||
|
# reduce image size
|
||||||
|
/var/lib/testvm/zero-disk.setup
|
||||||
|
|
||||||
|
# Prevent SSH from hanging for a long time when no external network access
|
||||||
|
echo 'UseDNS no' >> /etc/ssh/sshd_config
|
||||||
|
|
||||||
|
# Final tweaks
|
||||||
|
rm -rf /var/log/journal/*
|
||||||
5
bots/images/scripts/lib/base/Dockerfile
Normal file
5
bots/images/scripts/lib/base/Dockerfile
Normal file
|
|
@ -0,0 +1,5 @@
|
||||||
|
FROM fedora:30
|
||||||
|
|
||||||
|
ADD setup.sh /setup.sh
|
||||||
|
|
||||||
|
RUN /setup.sh
|
||||||
5
bots/images/scripts/lib/base/README.md
Normal file
5
bots/images/scripts/lib/base/README.md
Normal file
|
|
@ -0,0 +1,5 @@
|
||||||
|
Cockpit Base
|
||||||
|
===========================
|
||||||
|
|
||||||
|
Simple base container that installs cockpit-ws dependencies. Used in testing
|
||||||
|
and development to speed up container build times.
|
||||||
26
bots/images/scripts/lib/base/setup.sh
Executable file
26
bots/images/scripts/lib/base/setup.sh
Executable file
|
|
@ -0,0 +1,26 @@
|
||||||
|
#! /bin/sh
|
||||||
|
|
||||||
|
upgrade() {
|
||||||
|
# https://bugzilla.redhat.com/show_bug.cgi?id=1483553
|
||||||
|
dnf -v -y update 2>err.txt
|
||||||
|
ecode=$?
|
||||||
|
if [ $ecode -ne 0 ] ; then
|
||||||
|
grep -q -F -e "BDB1539 Build signature doesn't match environment" err.txt
|
||||||
|
if [ $? -eq 0 ]; then
|
||||||
|
set -eu
|
||||||
|
rpm --rebuilddb
|
||||||
|
dnf -v -y update
|
||||||
|
else
|
||||||
|
cat err.txt
|
||||||
|
exit ${ecode}
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
upgrade
|
||||||
|
|
||||||
|
set -eu
|
||||||
|
|
||||||
|
dnf install -y sed findutils glib-networking json-glib libssh openssl python3
|
||||||
|
|
||||||
|
dnf clean all
|
||||||
16
bots/images/scripts/lib/build-deps.sh
Executable file
16
bots/images/scripts/lib/build-deps.sh
Executable file
|
|
@ -0,0 +1,16 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -eu
|
||||||
|
|
||||||
|
# Download cockpit.spec, replace `npm-version` macro and then query all build requires
|
||||||
|
curl -s https://raw.githubusercontent.com/cockpit-project/cockpit/master/tools/cockpit.spec |
|
||||||
|
sed 's/%{npm-version:.*}/0/' |
|
||||||
|
sed '/Recommends:/d' |
|
||||||
|
rpmspec -D "$1" --buildrequires --query /dev/stdin |
|
||||||
|
sed 's/.*/"&"/' |
|
||||||
|
tr '\n' ' '
|
||||||
|
|
||||||
|
# support for backbranches
|
||||||
|
if [ "$1" = "rhel 7" ] || [ "$1" = "centos 7" ]; then
|
||||||
|
echo "golang-bin golang-src"
|
||||||
|
fi
|
||||||
35
bots/images/scripts/lib/containers.install
Executable file
35
bots/images/scripts/lib/containers.install
Executable file
|
|
@ -0,0 +1,35 @@
|
||||||
|
#!/bin/bash
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2016 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# HACK: docker falls over regularly, print its log if it does
|
||||||
|
systemctl start docker || journalctl -u docker
|
||||||
|
|
||||||
|
for NAME in bastion
|
||||||
|
do
|
||||||
|
mkdir -p "/var/tmp/containers/$NAME/rpms"
|
||||||
|
cp -f /var/tmp/build-results/*.rpm "/var/tmp/containers/$NAME/rpms/"
|
||||||
|
cd "/var/tmp/containers/$NAME/"
|
||||||
|
sed -i -e "s#FROM .*#FROM cockpit/base#" Dockerfile
|
||||||
|
docker build --build-arg OFFLINE=1 -t "cockpit/$NAME" . 1>&2;
|
||||||
|
rm -r "/var/tmp/containers/$NAME/rpms"
|
||||||
|
done
|
||||||
|
|
||||||
|
journalctl --flush || true
|
||||||
|
journalctl --sync || killall systemd-journald || true
|
||||||
|
rm -rf /var/log/journal/* || true
|
||||||
36
bots/images/scripts/lib/debian.bootstrap
Executable file
36
bots/images/scripts/lib/debian.bootstrap
Executable file
|
|
@ -0,0 +1,36 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
BASE=$(dirname $(dirname $0))
|
||||||
|
|
||||||
|
out=$1
|
||||||
|
arch=$2
|
||||||
|
virt_builder_image="$3"
|
||||||
|
if [ -n "$4" ]; then
|
||||||
|
apt_source="$4"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ "$VIRT_BUILDER_NO_CACHE" == "yes" ]; then
|
||||||
|
virt_builder_caching="--no-cache"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# 18.04 virt-builder image has an invalid apt proxy leftover; delete it
|
||||||
|
virt-builder $virt_builder_image \
|
||||||
|
$virt_builder_caching \
|
||||||
|
--output "$out" \
|
||||||
|
--size 8G \
|
||||||
|
--format qcow2 \
|
||||||
|
--arch "$arch" \
|
||||||
|
--root-password password:foobar \
|
||||||
|
--ssh-inject root:file:$BASE/../../machine/identity.pub \
|
||||||
|
--upload $BASE/../../machine/host_key:/etc/ssh/ssh_host_rsa_key \
|
||||||
|
--chmod 0600:/etc/ssh/ssh_host_rsa_key \
|
||||||
|
--upload $BASE/../../machine/host_key.pub:/etc/ssh/ssh_host_rsa_key.pub \
|
||||||
|
${apt_source:+--write /etc/apt/sources.list:"$apt_source"} \
|
||||||
|
--write /etc/apt/apt.conf.d/90nolanguages:'Acquire::Languages "none";' \
|
||||||
|
--run-command "sed -i 's/GRUB_TIMEOUT.*/GRUB_TIMEOUT=0/; /GRUB_CMDLINE_LINUX=/ s/"'"'"$/ console=ttyS0,115200 net.ifnames=0 biosdevname=0"'"'"/' /etc/default/grub" \
|
||||||
|
--run-command "update-grub" \
|
||||||
|
--run-command "sed -i 's/ens[^[:space:]:]*/eth0/' /etc/network/interfaces /etc/netplan/*.yaml || true" \
|
||||||
|
--run-command "rm --verbose -f /etc/apt/apt.conf" \
|
||||||
|
--run-command "export DEBIAN_FRONTEND=noninteractive; apt-get -y update; apt-get -y install eatmydata; eatmydata apt-get -y dist-upgrade"
|
||||||
92
bots/images/scripts/lib/debian.install
Executable file
92
bots/images/scripts/lib/debian.install
Executable file
|
|
@ -0,0 +1,92 @@
|
||||||
|
#! /bin/sh
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
export DEB_BUILD_OPTIONS=""
|
||||||
|
|
||||||
|
do_build=
|
||||||
|
do_install=
|
||||||
|
stdout_dest="/dev/null"
|
||||||
|
args=$(getopt -o "vqs:" -l "verbose,quick,skip:,build,install" -- "$@")
|
||||||
|
eval set -- "$args"
|
||||||
|
while [ $# -gt 0 ]; do
|
||||||
|
case $1 in
|
||||||
|
-v|--verbose)
|
||||||
|
stdout_dest="/dev/stdout"
|
||||||
|
;;
|
||||||
|
-q|--quick)
|
||||||
|
DEB_BUILD_OPTIONS="$DEB_BUILD_OPTIONS nocheck"
|
||||||
|
;;
|
||||||
|
--build)
|
||||||
|
do_build=t
|
||||||
|
;;
|
||||||
|
--install)
|
||||||
|
do_install=t
|
||||||
|
;;
|
||||||
|
--)
|
||||||
|
shift
|
||||||
|
break
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
shift
|
||||||
|
done
|
||||||
|
tar="$1"
|
||||||
|
|
||||||
|
|
||||||
|
# Build
|
||||||
|
|
||||||
|
if [ -n "$do_build" ]; then
|
||||||
|
rm -rf build-results
|
||||||
|
mkdir build-results
|
||||||
|
resultdir=$PWD/build-results
|
||||||
|
upstream_ver=$(ls cockpit-*.tar.gz | sed 's/^.*-//; s/.tar.gz//' | head -n1)
|
||||||
|
|
||||||
|
ln -sf cockpit-*.tar.gz cockpit_${upstream_ver}.orig.tar.gz
|
||||||
|
|
||||||
|
rm -rf cockpit-*/
|
||||||
|
tar -xzf cockpit-*.tar.gz
|
||||||
|
( cd cockpit-*/
|
||||||
|
cp -rp tools/debian debian
|
||||||
|
# put proper version into changelog, as we have versioned dependencies
|
||||||
|
sed -i "1 s/(.*)/($upstream_ver-1)/" debian/changelog
|
||||||
|
# Hack: Remove PCP build dependencies while pcp is not in testing
|
||||||
|
# (https://tracker.debian.org/pcp)
|
||||||
|
sed -i '/libpcp.*-dev/d' debian/control
|
||||||
|
dpkg-buildpackage -S -uc -us -nc
|
||||||
|
)
|
||||||
|
|
||||||
|
# Some unit tests want a real network interface
|
||||||
|
echo USENETWORK=yes >>~/.pbuilderrc
|
||||||
|
|
||||||
|
# pbuilder < 0.228.6 has broken /dev/pts/ptmx permissions; affects Ubuntu < 17.04
|
||||||
|
# see https://bugs.debian.org/841935
|
||||||
|
if ! grep -q ptmxmode /usr/lib/pbuilder/pbuilder-modules; then
|
||||||
|
echo "Fixing /dev/pts/ptmx mode in pbuilder"
|
||||||
|
sed -i '/mount -t devpts none/ s/$/,ptmxmode=666,newinstance/' /usr/lib/pbuilder/pbuilder-modules
|
||||||
|
fi
|
||||||
|
|
||||||
|
pbuilder build --buildresult "$resultdir" \
|
||||||
|
--logfile "$resultdir/build.log" \
|
||||||
|
cockpit_${upstream_ver}-1.dsc >$stdout_dest
|
||||||
|
lintian $resultdir/cockpit_*_$(dpkg --print-architecture).changes >&2
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Install
|
||||||
|
|
||||||
|
if [ -n "$do_install" ]; then
|
||||||
|
packages=$(find build-results -name "*.deb")
|
||||||
|
dpkg --install $packages
|
||||||
|
|
||||||
|
# FIXME: our tests expect cockpit.socket to not be running after boot, only
|
||||||
|
# after start_cockpit().
|
||||||
|
systemctl disable cockpit.socket
|
||||||
|
|
||||||
|
# HACK: tuned breaks QEMU (https://launchpad.net/bugs/1774000)
|
||||||
|
systemctl disable tuned.service 2>/dev/null || true
|
||||||
|
|
||||||
|
firewall-cmd --add-service=cockpit --permanent
|
||||||
|
|
||||||
|
journalctl --flush
|
||||||
|
journalctl --sync || killall systemd-journald
|
||||||
|
rm -rf /var/log/journal/*
|
||||||
|
fi
|
||||||
36
bots/images/scripts/lib/docker-images.setup
Executable file
36
bots/images/scripts/lib/docker-images.setup
Executable file
|
|
@ -0,0 +1,36 @@
|
||||||
|
#!/bin/bash
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2016 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
if [ $(uname -m) = x86_64 ]; then
|
||||||
|
docker pull busybox:latest
|
||||||
|
docker pull busybox:buildroot-2014.02
|
||||||
|
docker pull gcr.io/google_containers/pause:0.8.0
|
||||||
|
docker pull k8s.gcr.io/pause-amd64:3.1
|
||||||
|
# some aliases for different k8s variants
|
||||||
|
docker tag k8s.gcr.io/pause-amd64:3.1 gcr.io/google_containers/pause-amd64:3.0
|
||||||
|
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Download the i386 image and rename it
|
||||||
|
if [ $(uname -m) = i686 ]; then
|
||||||
|
docker pull i386/busybox:latest
|
||||||
|
docker tag docker.io/i386/busybox busybox
|
||||||
|
docker rmi docker.io/i386/busybox
|
||||||
|
fi
|
||||||
116
bots/images/scripts/lib/fedora.install
Executable file
116
bots/images/scripts/lib/fedora.install
Executable file
|
|
@ -0,0 +1,116 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -ex
|
||||||
|
|
||||||
|
# don't update already installed cockpit packages
|
||||||
|
installed=$(rpm --query --all --queryformat "%{NAME}-\[0-9\]\n" "cockpit*")
|
||||||
|
skip="cockpit-doc-[0-9]"
|
||||||
|
if [ -n "$installed" ]; then
|
||||||
|
skip="$skip
|
||||||
|
$installed"
|
||||||
|
fi
|
||||||
|
|
||||||
|
do_build=
|
||||||
|
do_install=
|
||||||
|
# we build RHEL 7.x in a CentOS mock, thus we can't parse os-release in the .spec
|
||||||
|
mock_opts="--define='os_version_id $(. /etc/os-release; echo $VERSION_ID)'"
|
||||||
|
args=$(getopt -o "vqs:" -l "verbose,quick,skip:,build,install,rhel,HACK-no-bootstrap-chroot" -- "$@")
|
||||||
|
eval set -- "$args"
|
||||||
|
while [ $# -gt 0 ]; do
|
||||||
|
case $1 in
|
||||||
|
-v|--verbose)
|
||||||
|
mock_opts="$mock_opts --verbose"
|
||||||
|
;;
|
||||||
|
-q|--quick)
|
||||||
|
mock_opts="$mock_opts --nocheck --define='selinux 0'"
|
||||||
|
;;
|
||||||
|
-s|--skip)
|
||||||
|
skip="$skip
|
||||||
|
$2"
|
||||||
|
shift
|
||||||
|
;;
|
||||||
|
--build)
|
||||||
|
do_build=t
|
||||||
|
;;
|
||||||
|
--install)
|
||||||
|
do_install=t
|
||||||
|
;;
|
||||||
|
--rhel)
|
||||||
|
# For RHEL we actually build in EPEL, which is based
|
||||||
|
# on CentOS. On CentOS, the spec file has both
|
||||||
|
# %centos and %rhel defined, but it gives precedence
|
||||||
|
# to %centos, as it must. To make it produce the RHEL
|
||||||
|
# packages, we explicitly undefine %centos here.
|
||||||
|
mock_opts="$mock_opts --define='centos 0'"
|
||||||
|
;;
|
||||||
|
--HACK-no-bootstrap-chroot)
|
||||||
|
mock_opts="$mock_opts --no-bootstrap-chroot"
|
||||||
|
;;
|
||||||
|
--)
|
||||||
|
shift
|
||||||
|
break
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
shift
|
||||||
|
done
|
||||||
|
tar=$1
|
||||||
|
|
||||||
|
# Build
|
||||||
|
|
||||||
|
if [ -n "$do_build" ]; then
|
||||||
|
# Some tests need a non-loopback internet address, so we allow
|
||||||
|
# networking during build. Note that we use "--offline" below, so
|
||||||
|
# we should still be protected against unexpected package
|
||||||
|
# installations.
|
||||||
|
echo "config_opts['rpmbuild_networking'] = True" >>/etc/mock/site-defaults.cfg
|
||||||
|
# don't destroy the mock after building, we want to run rpmlint
|
||||||
|
echo "config_opts['cleanup_on_success'] = False" >>/etc/mock/site-defaults.cfg
|
||||||
|
# HACK: don't fall over on unavailable repositories, as we are offline
|
||||||
|
# (https://bugzilla.redhat.com/show_bug.cgi?id=1549291)
|
||||||
|
sed --follow-symlinks -i '/skip_if_unavailable=False/d' /etc/mock/default.cfg
|
||||||
|
|
||||||
|
rm -rf build-results
|
||||||
|
srpm=$(/var/lib/testvm/make-srpm "$tar")
|
||||||
|
LC_ALL=C.UTF-8 su builder -c "/usr/bin/mock --offline --no-clean --resultdir build-results $mock_opts --rebuild $srpm"
|
||||||
|
|
||||||
|
su builder -c "/usr/bin/mock --offline --shell" <<EOF
|
||||||
|
rm -rf /builddir/build
|
||||||
|
if type rpmlint >/dev/null 2>&1; then
|
||||||
|
# blacklist "E: no-changelogname-tag" rpmlint error, expected due to our template cockpit.spec
|
||||||
|
mkdir -p ~/.config
|
||||||
|
echo 'addFilter("E: no-changelogname-tag")' > ~/.config/rpmlint
|
||||||
|
# we expect the srpm to be clean
|
||||||
|
echo
|
||||||
|
echo '====== rpmlint on srpm ====='
|
||||||
|
rpmlint /builddir/build/SRPMS/*.src.rpm
|
||||||
|
# this still has lots of errors, run it for information only
|
||||||
|
echo
|
||||||
|
echo '====== rpmlint binary rpms (advisory) ====='
|
||||||
|
rpmlint /builddir/build/RPMS/ || true
|
||||||
|
else
|
||||||
|
echo '====== skipping rpmlint check, not installed ====='
|
||||||
|
fi
|
||||||
|
EOF
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Install
|
||||||
|
|
||||||
|
if [ -n "$do_install" ]; then
|
||||||
|
packages=$(find build-results -name "*.rpm" -not -name "*.src.rpm" | grep -vG "$skip")
|
||||||
|
rpm -U --force $packages
|
||||||
|
|
||||||
|
if type firewall-cmd > /dev/null 2> /dev/null; then
|
||||||
|
systemctl start firewalld
|
||||||
|
firewall-cmd --add-service=cockpit --permanent
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Make sure we clean out the journal
|
||||||
|
journalctl --flush
|
||||||
|
journalctl --sync || killall systemd-journald
|
||||||
|
rm -rf /var/log/journal/*
|
||||||
|
rm -rf /var/lib/NetworkManager/dhclient-*.lease
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ -n "$do_build" ]; then
|
||||||
|
su builder -c "/usr/bin/mock --clean"
|
||||||
|
fi
|
||||||
46
bots/images/scripts/lib/kubernetes.setup
Executable file
46
bots/images/scripts/lib/kubernetes.setup
Executable file
|
|
@ -0,0 +1,46 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# Kubernetes is delivered in a non-functional state on Fedora and similar operating systems
|
||||||
|
# The following commands are needed to get it running.
|
||||||
|
|
||||||
|
cd /etc/kubernetes/
|
||||||
|
|
||||||
|
cat <<EOF > openssl.conf
|
||||||
|
oid_section = new_oids
|
||||||
|
[new_oids]
|
||||||
|
[req]
|
||||||
|
encrypt_key = no
|
||||||
|
string_mask = nombstr
|
||||||
|
req_extensions = v3_req
|
||||||
|
distinguished_name = v3_name
|
||||||
|
[v3_name]
|
||||||
|
commonName = kubernetes
|
||||||
|
[v3_req]
|
||||||
|
basicConstraints = CA:FALSE
|
||||||
|
subjectAltName = @alt_names
|
||||||
|
[alt_names]
|
||||||
|
DNS.1 = kubernetes
|
||||||
|
DNS.2 = kubernetes.default
|
||||||
|
DNS.3 = kubernetes.default.svc
|
||||||
|
DNS.4 = kubernetes.default.svc.cluster.local
|
||||||
|
IP.1 = 127.0.0.1
|
||||||
|
IP.2 = 10.254.0.1
|
||||||
|
EOF
|
||||||
|
|
||||||
|
openssl genrsa -out ca.key 2048
|
||||||
|
openssl req -x509 -new -nodes -key ca.key -days 3072 -out ca.crt -subj '/CN=kubernetes'
|
||||||
|
openssl genrsa -out server.key 2048
|
||||||
|
openssl req -config openssl.conf -new -key server.key -out server.csr -subj '/CN=kubernetes'
|
||||||
|
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 3072 -extensions v3_req -extfile openssl.conf
|
||||||
|
# make keys readable for "kube" group and thus for kube-apiserver.service on newer OSes
|
||||||
|
if getent group kube >/dev/null; then
|
||||||
|
chgrp kube ca.key server.key
|
||||||
|
chmod 640 ca.key server.key
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo -e '{"user":"admin"}\n{"user":"scruffy","readonly": true}' > /etc/kubernetes/authorization
|
||||||
|
echo -e 'fubar,admin,10101\nscruffy,scruffy,10102' > /etc/kubernetes/passwd
|
||||||
|
|
||||||
|
echo 'KUBE_API_ARGS="--service-account-key-file=/etc/kubernetes/server.key --client-ca-file=/etc/kubernetes/ca.crt --tls-cert-file=/etc/kubernetes/server.crt --tls-private-key-file=/etc/kubernetes/server.key --basic-auth-file=/etc/kubernetes/passwd --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/authorization"' >> apiserver
|
||||||
|
echo 'KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/etc/kubernetes/ca.crt --service-account-private-key-file=/etc/kubernetes/server.key"' >> controller-manager
|
||||||
|
|
||||||
33
bots/images/scripts/lib/make-srpm
Executable file
33
bots/images/scripts/lib/make-srpm
Executable file
|
|
@ -0,0 +1,33 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -eu
|
||||||
|
|
||||||
|
tar=$1
|
||||||
|
|
||||||
|
version=$(echo "$1" | sed -n 's|.*cockpit-\([^ /-]\+\)\.tar\..*|\1|p')
|
||||||
|
if [ -z "$version" ]; then
|
||||||
|
echo "make-srpm: couldn't parse version from tarball: $1"
|
||||||
|
exit 2
|
||||||
|
fi
|
||||||
|
|
||||||
|
# We actually modify the spec so that the srpm is standalone buildable
|
||||||
|
modify_spec() {
|
||||||
|
sed -e "/^Version:.*/d" -e "1i\
|
||||||
|
%define wip wip\nVersion: $version\n"
|
||||||
|
}
|
||||||
|
|
||||||
|
tmpdir=$(mktemp -d $PWD/srpm-build.XXXXXX)
|
||||||
|
tar xaf "$1" -O cockpit-$version/tools/cockpit.spec | modify_spec > $tmpdir/cockpit.spec
|
||||||
|
|
||||||
|
rpmbuild -bs \
|
||||||
|
--quiet \
|
||||||
|
--define "_sourcedir $(dirname $1)" \
|
||||||
|
--define "_specdir $tmpdir" \
|
||||||
|
--define "_builddir $tmpdir" \
|
||||||
|
--define "_srcrpmdir `pwd`" \
|
||||||
|
--define "_rpmdir $tmpdir" \
|
||||||
|
--define "_buildrootdir $tmpdir/.build" \
|
||||||
|
$tmpdir/cockpit.spec
|
||||||
|
|
||||||
|
rpm --qf '%{Name}-%{Version}-%{Release}.src.rpm\n' -q --specfile $tmpdir/cockpit.spec | head -n1
|
||||||
|
rm -rf $tmpdir
|
||||||
BIN
bots/images/scripts/lib/pubring.gpg
Normal file
BIN
bots/images/scripts/lib/pubring.gpg
Normal file
Binary file not shown.
BIN
bots/images/scripts/lib/secring.gpg
Normal file
BIN
bots/images/scripts/lib/secring.gpg
Normal file
Binary file not shown.
51
bots/images/scripts/lib/zero-disk.setup
Executable file
51
bots/images/scripts/lib/zero-disk.setup
Executable file
|
|
@ -0,0 +1,51 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# This file is part of Cockpit.
|
||||||
|
#
|
||||||
|
# Copyright (C) 2016 Red Hat, Inc.
|
||||||
|
#
|
||||||
|
# Cockpit is free software; you can redistribute it and/or modify it
|
||||||
|
# under the terms of the GNU Lesser General Public License as published by
|
||||||
|
# the Free Software Foundation; either version 2.1 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# Cockpit is distributed in the hope that it will be useful, but
|
||||||
|
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||||
|
# Lesser General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU Lesser General Public License
|
||||||
|
# along with Cockpit; If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
# We don't want to delete the pbuilder caches since we need them
|
||||||
|
# during build. Mock with --offline and dnf is happy without caches,
|
||||||
|
# but with yum it isn't, so we provide an option to also leave the
|
||||||
|
# mock caches in place.
|
||||||
|
#
|
||||||
|
# We also want to keep cracklib since otherwise password quality
|
||||||
|
# checks break on Debian.
|
||||||
|
|
||||||
|
if [ -f /root/.skip-zero-disk ]; then
|
||||||
|
echo "Skipping zero-disk.setup as /root/.skip-zero-disk exists"
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
keep="! -path /var/cache/pbuilder ! -path /var/cache/cracklib ! -path /var/cache/tomcat"
|
||||||
|
while [ $# -gt 0 ]; do
|
||||||
|
case $1 in
|
||||||
|
--keep-mock-cache)
|
||||||
|
keep="$keep ! -path /var/cache/mock"
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
shift
|
||||||
|
done
|
||||||
|
|
||||||
|
if [ -d "/var/cache" ]; then
|
||||||
|
find /var/cache/* -maxdepth 0 -depth -name "*" $keep -exec rm -rf {} \;
|
||||||
|
fi
|
||||||
|
rm -rf /var/tmp/*
|
||||||
|
rm -rf /var/log/journal/*
|
||||||
|
|
||||||
|
dd if=/dev/zero of=/root/junk || true
|
||||||
|
sync
|
||||||
|
rm -f /root/junk
|
||||||
3
bots/images/scripts/network-ifcfg-eth0
Normal file
3
bots/images/scripts/network-ifcfg-eth0
Normal file
|
|
@ -0,0 +1,3 @@
|
||||||
|
BOOTPROTO="dhcp"
|
||||||
|
DEVICE="eth0"
|
||||||
|
ONBOOT="yes"
|
||||||
3
bots/images/scripts/network-ifcfg-eth1
Normal file
3
bots/images/scripts/network-ifcfg-eth1
Normal file
|
|
@ -0,0 +1,3 @@
|
||||||
|
BOOTPROTO="none"
|
||||||
|
DEVICE="eth1"
|
||||||
|
ONBOOT="no"
|
||||||
4
bots/images/scripts/openshift.bootstrap
Executable file
4
bots/images/scripts/openshift.bootstrap
Executable file
|
|
@ -0,0 +1,4 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
BASE=$(dirname $0)
|
||||||
|
BOOTSTRAP_VOLUME_SIZE="20G" $BASE/virt-builder-fedora "$1" fedora-28 x86_64
|
||||||
2
bots/images/scripts/openshift.install
Executable file
2
bots/images/scripts/openshift.install
Executable file
|
|
@ -0,0 +1,2 @@
|
||||||
|
#!/bin/sh
|
||||||
|
# By default this does nothing
|
||||||
334
bots/images/scripts/openshift.setup
Executable file
334
bots/images/scripts/openshift.setup
Executable file
|
|
@ -0,0 +1,334 @@
|
||||||
|
#! /bin/bash
|
||||||
|
|
||||||
|
set -eux
|
||||||
|
|
||||||
|
# Wait for x for many minutes
|
||||||
|
function wait() {
|
||||||
|
for i in $(seq 1 100); do
|
||||||
|
if eval "$@"; then
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
sleep 6
|
||||||
|
done
|
||||||
|
exit 6
|
||||||
|
}
|
||||||
|
|
||||||
|
function docker_images_has() {
|
||||||
|
docker images | tr -s ' ' | cut -d ' ' --output-delimiter=: -f1,2 | grep -q "$1"
|
||||||
|
}
|
||||||
|
|
||||||
|
function docker_pull() {
|
||||||
|
docker pull $1
|
||||||
|
echo "$1" >> /tmp/pulledImages
|
||||||
|
docker_images_has $1
|
||||||
|
}
|
||||||
|
rm -f /tmp/pulledImages # will be populated by pulled images names
|
||||||
|
|
||||||
|
# Cleanup the file system a bit
|
||||||
|
rm -rf /var/cache/dnf /var/cache/yum
|
||||||
|
xfs_growfs /
|
||||||
|
|
||||||
|
echo foobar | passwd --stdin root
|
||||||
|
|
||||||
|
nmcli con add con-name "static-eth1" ifname eth1 type ethernet ip4 "10.111.112.101/20" gw4 10.111.112.1 ipv4.dns "10.111.112.1"
|
||||||
|
nmcli con up "static-eth1"
|
||||||
|
|
||||||
|
echo "10.111.112.101 f1.cockpit.lan" >> /etc/hosts
|
||||||
|
|
||||||
|
printf "OPENSHIFT CONSOLE\n https://10.111.112.101:8443\n Login: scruffy Password: scruffy\n\n" >> /etc/issue
|
||||||
|
printf "OPENSHIFT LISTENING ON LOCALHOST\n $ ssh -NL 8443:localhost:8443 root@10.111.112.101\n\n" >> /etc/issue
|
||||||
|
|
||||||
|
# Disable these things
|
||||||
|
ln -sf ../selinux/config /etc/sysconfig/selinux
|
||||||
|
printf 'SELINUX=permissive\nSELINUXTYPE=targeted\n' > /etc/selinux/config
|
||||||
|
setenforce 0
|
||||||
|
systemctl stop firewalld
|
||||||
|
dnf mark install iptables
|
||||||
|
dnf -y remove firewalld
|
||||||
|
iptables -F
|
||||||
|
|
||||||
|
wait dnf -y install docker python libselinux-python
|
||||||
|
|
||||||
|
hostnamectl set-hostname f1.cockpit.lan
|
||||||
|
|
||||||
|
# Setup a nfs server
|
||||||
|
wait dnf install -y nfs-utils
|
||||||
|
mkdir /nfsexport
|
||||||
|
echo "/nfsexport *(rw,sync)" > /etc/exports
|
||||||
|
|
||||||
|
# This name is put into /etc/hosts later
|
||||||
|
echo "INSECURE_REGISTRY='--insecure-registry registry:5000'" >> /etc/sysconfig/docker
|
||||||
|
systemctl enable docker
|
||||||
|
|
||||||
|
# HACK: docker falls over regularly, print its log if it does
|
||||||
|
systemctl start docker || journalctl -u docker
|
||||||
|
|
||||||
|
# Can't use latest because release on older versions are done out of order
|
||||||
|
RELEASES_JSON=$(curl -s https://api.github.com/repos/openshift/origin/releases)
|
||||||
|
set +x
|
||||||
|
VERSION=$(echo "$RELEASES_JSON" | LC_ALL=C.UTF-8 python3 -c "import json, sys, distutils.version; obj=json.load(sys.stdin); releases = [x.get('tag_name', '') for x in obj if not x.get('prerelease')]; print(sorted(releases, reverse=True, key=distutils.version.LooseVersion)[0])") || {
    echo "Failed to parse latest release:" >&2
    echo "$RELEASES_JSON" >&2
    echo "------------------------------------" >&2
    exit 1
}
set -x

# origin is too rotund to build in a normal sized VM. The linker
# step runs out of memory. In addition origin has no Fedora packages
docker_pull "openshift/origin:$VERSION"
docker run --rm --entrypoint tar "openshift/origin:$VERSION" -C /usr/bin -c openshift oc kubectl | tar -C /usr/bin -xv

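# /openshift-prep is run by the systemd unit below (ExecStartPre) on every
# start; it re-applies the hostname, starts NFS, and writes the actual
# /openshift-run start script (ExecStart)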
# Runs a master if on the right address, otherwise runs a node
cat > /openshift-prep <<EOF
#!/bin/sh -ex
/usr/bin/hostnamectl set-hostname f1.cockpit.lan
/usr/bin/systemctl enable rpcbind
/usr/bin/systemctl start rpcbind
/usr/bin/systemctl start nfs-server
cmd="/usr/bin/openshift start --master=10.111.112.101 --listen=https://0.0.0.0:8443"
echo "#!/bin/sh -ex
\$cmd" > /openshift-run
EOF

chmod +x /openshift-prep
touch /openshift-run
chmod +x /openshift-run

cat > /etc/systemd/system/openshift.service <<EOF
[Unit]
Description=Openshift
Wants=network-online.target
After=network-online.target docker.service
Requires=docker.service
[Service]
ExecStartPre=/openshift-prep
ExecStart=/openshift-run
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
EOF

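# Enable and start OpenShift; on failure dump the journal so the image build
# log shows what went wrong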
systemctl daemon-reload
systemctl enable systemd-networkd-wait-online
systemctl enable openshift
systemctl start openshift || journalctl -u openshift

# Now pull all the images we're going to use with openshift
docker_pull "openshift/origin-deployer:$VERSION"
docker_pull "openshift/origin-docker-registry:$VERSION"
docker_pull "openshift/origin-pod:$VERSION"

# Now pull images used for integration tests
docker_pull registry:2

# HACK: Make the openshift registry recognize docker registries with the OpenShift CA
# (https://github.com/openshift/origin/issues/1753)
mkdir /tmp/registry
cd /tmp/registry
cat << EOF > Dockerfile
FROM openshift/origin-docker-registry:$VERSION
ADD *.crt /etc/pki/ca-trust/source/anchors/
USER 0
RUN update-ca-trust extract
USER 1001
EOF
cp /openshift.local.config/master/ca.crt openshift-ca.crt
docker build --tag openshift/origin-docker-registry:$VERSION .
cd /tmp/
rm -r /tmp/registry
cp /openshift.local.config/master/ca.crt /etc/pki/ca-trust/source/anchors/openshift-ca.crt
update-ca-trust extract

# HACK: Work around GnuTLS (client-side) or Go TLS (server-side) bug with
# multiple O= RDNs; if it's in the "wrong" order, create a new admin
# certificate that swaps it around
# See https://github.com/openshift/origin/issues/18715
dnf install -y openssl
if openssl x509 -in /openshift.local.config/master/admin.crt -text | grep -q 'Subject:.*system:cluster-admins.*system:masters'; then
    echo "Regenerating admin certificate to work around https://github.com/openshift/origin/issues/18715"
    pushd /openshift.local.config/master/
    mv admin.key admin.key.orig
    mv admin.crt admin.crt.orig
    mv admin.kubeconfig admin.kubeconfig.orig
    openssl genrsa -out admin.key 2048
    openssl req -new -nodes -key admin.key -out admin.csr -subj '/O=system:masters/O=system:cluster-admins/CN=system:admin'
    openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 730 -out admin.crt
    rm admin.csr
    oc adm create-kubeconfig --certificate-authority=ca.crt --client-certificate=admin.crt --client-key=admin.key --master="https://10.111.112.101:8443" --kubeconfig=admin.kubeconfig
    popd
fi

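# Install the admin kubeconfig as root's default so plain oc/kubectl commands
# talk to this cluster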
mkdir -p /root/.kube
cp /openshift.local.config/master/admin.kubeconfig /root/.kube/config

# Check if we can connect to openshift
wait oc get namespaces

wait oc get scc/restricted

# Tell openshift to allow root containers by default. Otherwise most
# development examples just plain fail to work
oc patch scc restricted -p '{ "runAsUser": { "type": "RunAsAny" } }'

# Tell openshift to allow logins from the openshift web console on a localhost system
oc patch oauthclient/openshift-web-console -p '{"redirectURIs":["https://10.111.112.101:8443/console/", "https://localhost:9000/"]}'

# Deploy the registry
# (the old --credentials option is deprecated, so it is not passed here)
rm -rf /usr/share/rhel/secrets
oc adm registry

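# Helpers used with wait() below: check that a service endpoint has at least
# one address, and that an image shows up in the openshift image list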
function endpoint_has_address() {
    oc get endpoints $1 --template='{{.subsets}}' | grep -q addresses
}

function images_has() {
    oc get images | grep -q "$1"
}

# Wait for registry deployment to happen
wait oc get endpoints docker-registry
wait endpoint_has_address docker-registry

# Load in some remote images
echo '{"apiVersion":"v1","kind":"ImageStream","metadata": {"name":"busybox"},"spec":{"dockerImageRepository": "busybox"}}' > /tmp/imagestream.json
oc create -f /tmp/imagestream.json

# Get registry address and configure docker for it
address="$(oc get services docker-registry | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')"
echo "$address registry registry.cockpit.lan" >> /etc/hosts
echo "INSECURE_REGISTRY='--insecure-registry registry:5000 --insecure-registry $address'" >> /etc/sysconfig/docker

# Log in as another user
printf "scruffy\r\nscruffy\r\n" | oc login
oc new-project marmalade

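# Use the session token to log docker in to the integrated registry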
token=$(oc whoami -t)
docker login -p "$token" -u unneeded registry:5000

echo '{"apiVersion":"v1","kind":"ImageStream","metadata": {"name":"busybee"}}' > /tmp/imagestream.json
oc create -f /tmp/imagestream.json
echo '{"apiVersion":"v1","kind":"ImageStream","metadata": {"name":"juggs"}}' > /tmp/imagestream.json
oc create -f /tmp/imagestream.json
echo '{"apiVersion":"v1","kind":"ImageStream","metadata": {"name":"origin"}}' > /tmp/imagestream.json
oc create -f /tmp/imagestream.json

# Get ready to push busybox into place
docker_pull busybox
docker tag busybox registry:5000/marmalade/busybee:latest
docker tag busybox registry:5000/marmalade/busybee:0.x
docker push registry:5000/marmalade/busybee

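# Build a family of test images exercising various Dockerfile features
# (volumes, labels, entrypoints, ...) and push them as tags of marmalade/juggs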
mkdir /tmp/juggs
cd /tmp/juggs
printf '#!/bin/sh\necho hello from container\nsleep 100000\n' > echo-script
printf 'FROM busybox\nMAINTAINER cockpit@example.com\nEXPOSE 8888\nADD echo-script /\nRUN chmod +x /echo-script\nCMD \"/echo-script\"' > Dockerfile
docker build -t registry:5000/marmalade/juggs:latest .
printf "FROM registry:5000/marmalade/juggs:latest\nVOLUME /test\nVOLUME /another\nWORKDIR /tmp" > Dockerfile
docker build -t registry:5000/marmalade/juggs:2.11 .
cp /usr/bin/openshift .
printf "FROM registry:5000/marmalade/juggs:latest\nADD openshift /usr/bin\nUSER nobody:wheel\nENTRYPOINT [\"top\", \"-b\"]\nCMD [\"-c\"]" > Dockerfile
docker build -t registry:5000/marmalade/juggs:2.5 .
printf "FROM registry:5000/marmalade/juggs:2.5\nSTOPSIGNAL SIGKILL\nONBUILD ADD . /app/src\nARG hello=test\nARG simple\nLABEL Test=Value\nLABEL version=\"1.0\"" > Dockerfile
docker build -t registry:5000/marmalade/juggs:2.8 .
printf "FROM registry:5000/marmalade/juggs:2.8\nLABEL description=\"This is a test description of an image. It can be as long as a paragraph, featuring a nice brogrammer sales pitch.\"\nLABEL name=\"Juggs Image\"\nLABEL build-date=2016-03-04\nLABEL url=\"http://hipsum.co/\"" > Dockerfile
docker build -t registry:5000/marmalade/juggs:2.9 .
cd /tmp
rm -r /tmp/juggs

docker push registry:5000/marmalade/juggs

# Push the marmalade/origin stream twice, with different image content each time
docker tag docker.io/busybox:latest registry:5000/marmalade/origin
docker push registry:5000/marmalade/origin
docker tag "openshift/origin:$VERSION" registry:5000/marmalade/origin
docker push registry:5000/marmalade/origin

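# A separate project stuffed with image streams and tags, presumably to
# exercise listings at scale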
oc new-project pizzazz

# Some big image streams
for i in $(seq 1 15); do
    for j in $(seq 1 10); do
        docker tag docker.io/busybox:latest registry:5000/pizzazz/stream$i:tag$j
    done
    docker push registry:5000/pizzazz/stream$i
done

# And a monster sized one
for j in $(seq 1 100); do
    docker tag docker.io/busybox:latest registry:5000/pizzazz/monster:tag$j
done
docker push registry:5000/pizzazz/monster

# Use the admin context by default
oc config use-context default/10-111-112-101:8443/system:admin

# Some roles for testing against
printf '{"kind":"List","apiVersion":"v1","items":[{"kind":"RoleBinding","apiVersion":"v1","metadata":{"name":"registry-editor","namespace":"marmalade","resourceVersion":"1"},"userNames":["scruffy","amanda"],"groupNames":null,"subjects":[{"kind":"User","name":"scruffy"},{"kind":"User","name":"amanda"}],"roleRef":{"name":"registry-editor"}},{"kind":"RoleBinding","apiVersion":"v1","metadata":{"name":"registry-viewer","namespace":"marmalade","resourceVersion":"1"},"userNames":["scruffy","tom","amanda"],"groupNames":["sports"],"subjects":[{"kind":"User","name":"scruffy"},{"kind":"User","name":"tom"},{"kind":"User","name":"amanda"},{"kind":"Group","name":"sports"}],"roleRef":{"name":"registry-viewer"}}]}' | oc create -f -
oc patch rolebinding/admin --namespace=marmalade -p '{"kind": "RoleBinding", "metadata":{"name":"admin","namespace":"marmalade"},"userNames":["scruffy"],"groupNames":null,"subjects":[{"kind":"User","name":"scruffys"}],"roleRef":{"name":"admin"}}' || true

# For testing the Cockpit OAuth client
printf '{"kind":"OAuthClient","apiVersion":"v1","metadata":{"name":"cockpit-oauth-devel"},"respondWithChallenges":false,"secret":"secret","allowAnyScope":true,"redirectURIs":["http://localhost:9001"] }' | oc create -f -

# Wait for the busybox image stream import to finish
wait images_has busybox

# Set up basics for building images
docker build -t cockpit/base /var/tmp/cockpit-base

# Print out the kubeconfig file for copy/paste
echo "---------------------------------------------------------------"
cat /root/.kube/config

# Wait a bit in case an operator wants to copy some info
sleep 20

# Use standard locations for the kubelet kubeconfig. f1.cockpit.lan is the master hostname,
# which is its own node, and we just copy that for the others
mkdir -p /var/lib/kubelet
cp /openshift.local.config/node-f1.cockpit.lan/node.kubeconfig /var/lib/kubelet/kubeconfig

# Turn this on in sshd_config; not in use until the binary is in place
printf 'AuthorizedKeysCommand /usr/local/bin/authorized-kube-keys --kubeconfig=/var/lib/kubelet/kubeconfig\nAuthorizedKeysCommandUser root' >> /etc/ssh/sshd_config

# Pull down the remaining images
/var/lib/testvm/docker-images.setup

dnf install -y cockpit-system

docker info

# Reduce image size
dnf clean all

systemctl stop docker
# Write out all changes before zero-disk.setup fills up the disk
sync
/var/lib/testvm/zero-disk.setup
systemctl start docker && sleep 10

# Verify all pulled docker images are really present
echo All present images:
docker images
echo "Total docker images:"
docker images | wc

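# Snapshot the names of locally present images for comparison against the
# list of images recorded by docker_pull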
docker images --format "{{.Repository}}:{{.Tag}}" > /tmp/presentImages

echo
echo All images actually pulled
cat /tmp/presentImages
echo

echo
echo All images expected to be pulled
cat /tmp/pulledImages
echo

# Verify all expected are actually pulled
while read img ; do
    echo Verify "$img"
    grep "$img" /tmp/presentImages || (echo "Error: Image $img is missing" && exit 10)
done < /tmp/pulledImages
1
bots/images/scripts/ovirt.bootstrap
Symbolic link
@@ -0,0 +1 @@
centos-7.bootstrap
5
bots/images/scripts/ovirt.install
Executable file
@@ -0,0 +1,5 @@
#! /bin/bash

set -e

/var/lib/testvm/fedora.install "$@"
10
bots/images/scripts/rhel-7-7.bootstrap
Executable file
@@ -0,0 +1,10 @@
#!/bin/bash

set -ex

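# If no subscription path was given, fall back to the credentials stored
# under ~/.rhel (when present)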
if [ -z "$SUBSCRIPTION_PATH" ] && [ -e ~/.rhel/login ]; then
    SUBSCRIPTION_PATH=~/.rhel
fi

BASE=$(dirname $0)
$BASE/virt-install-fedora "$1" x86_64 "http://download.eng.bos.redhat.com/nightly/latest-RHEL-7.7/compose/Server/x86_64/os/" $SUBSCRIPTION_PATH
8
bots/images/scripts/rhel-7-7.install
Executable file
@@ -0,0 +1,8 @@
#! /bin/bash

set -e

# Remove the cockpit distro packages; we test with upstream master
rpm --erase --verbose cockpit cockpit-ws cockpit-bridge cockpit-system

/var/lib/testvm/fedora.install --rhel "$@"
1
bots/images/scripts/rhel-7-7.setup
Symbolic link
@@ -0,0 +1 @@
rhel.setup
Some files were not shown because too many files have changed in this diff.