# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
FROM almalinux:9
# Allow customization of build user ID and name
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=${USER_UID}
COPY library-scripts/*.sh /tmp/library-scripts/
RUN \
sh /tmp/library-scripts/provide-dev-deps.sh && \
sh /tmp/library-scripts/provide-user.sh ${USERNAME} ${USER_UID} ${USER_GID} && \
dnf clean all && rm -rf /var/cache/dnf
USER $USERNAME
# `ngx_http_voms_module` for developers
A devcontainer is ready to use for developers. A set of packages is already installed, but nginx is not.
## How to build and install nginx with or without the httpg patch
To install the latest stable version of [nginx](http://nginx.org/en/download.html), copy the `nginx.repo` file (contained in the `docker` directory) into the `/etc/yum.repos.d/` directory and install nginx with `yum`:
```shell
$ sudo cp docker/nginx.repo /etc/yum.repos.d/
$ sudo yum install -y nginx
```
If instead you want to build and install the latest stable version of [nginx](http://nginx.org/en/download.html) with the httpg patch, a bash library is available. Source it and run the commands below:
```shell
$ source .devcontainer/assets/build-library.sh
$ downloadNginx
$ buildHttpgNginxRPM
$ sudo rpm -ivh ~/rpmbuild/RPMS/x86_64/nginx-*.httpg.x86_64.rpm
```
## How to build and install the `ngx_http_voms_module`
To build and install the `ngx_http_voms_module`, nginx has to be installed in the container (see the previous section). Once this requirement is satisfied, you can use the library contained in the `.devcontainer/assets` folder as follows (NOTE: if you have already downloaded the nginx sources, you can skip the corresponding command):
```shell
$ source .devcontainer/assets/build-library.sh
$ downloadNginx
$ buildVomsModuleRPM
$ sudo rpm -ivh ~/rpmbuild/RPMS/x86_64/nginx-module-http-voms-*.x86_64.rpm
```
## How to manage this project
If you want to understand how this project works, start from the CI. Three stages are defined:
### 1. build-rpms
Starting from a clean AlmaLinux 9, we install all the packages needed to compile nginx and to build an RPM package. The bash steps that achieve these results are defined in the `.devcontainer/assets/build-library.sh` file, so you can read that script to learn which nginx version we use, how to download it, how to set up the environment and how to build the RPMs.
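In short, the stage boils down to roughly the following sequence (a sketch of what the CI job does, assuming the build dependencies are already installed and the shell is in the project root):
```shell
$ source .devcontainer/assets/build-library.sh
$ downloadNginx         # fetch the nginx source RPM and install it into ~/rpmbuild
$ buildHttpgNginxRPM    # build nginx with the HTTPG patch applied
$ buildVomsModuleRPM    # build the ngx_http_voms_module RPM
$ ls ~/rpmbuild/RPMS/x86_64/
```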
It is important to underline that, to build the nginx RPM, we use the spec file in the `rpm` directory, which is the official nginx 1.24.0 spec file extended with the HTTPG patch. To build the `ngx_http_voms_module`, instead, we have defined a dedicated spec file. The files used to build the module RPM are written, named and laid out following the common practices of nginx modules: the source file is in the `src` folder, while the `config` and `config.make` files are in the project root directory.
At the end of this stage, all the resulting RPMs are saved as job artifacts.
### 2. docker-build-rpms
In this stage we set up a Docker image with nginx, the httpg patch and the `ngx_http_voms_module`. To do this, we use a set of scripts from the [`helper-scripts`](https://baltig.infn.it/mw-devel/helper-scripts.git) project.
The Dockerfile and all the files needed to build the image are in the `docker` directory. The image starts from AlmaLinux 9, defines a user and installs a set of useful packages. After that we import the nginx repo file, so that the packages provided by nginx, including its latest stable version, become available. Finally we install the RPM packages built in the previous stage, together with the njs module.
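To reproduce the image build locally without the helper scripts, something along these lines should work (a sketch: the image tag is arbitrary and the RPMs are assumed to come from a build on AlmaLinux 9, matching the Dockerfile default `EL_VERSION=9`):
```shell
$ mkdir -p docker/artifacts
$ cp ~/rpmbuild/RPMS/x86_64/*.rpm docker/artifacts/
$ rm -f docker/artifacts/*-debuginfo*.rpm   # drop debuginfo packages, as the CI job does
$ cd docker && docker build -t nginx-httpg-voms .
```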
### 3. push-to-dockerhub
In this last stage we push to Docker Hub the image built in the previous stage. Note that this stage runs only on pushes to the master branch.
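Stripped of the helper scripts, the push amounts to something like the following (a sketch: the local tag and the Docker Hub credentials are illustrative):
```shell
$ docker login -u "${DOCKERHUB_USER}" -p "${DOCKERHUB_PASSWORD}"
$ docker tag nginx-httpg-voms cnafsd/nginx-httpg-voms:latest
$ docker push cnafsd/nginx-httpg-voms:latest
```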
#!/usr/bin/env bash
# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
downloadNginx() {
# check if ~/rpmbuild exists and is not empty
if [ -d ${HOME}/rpmbuild ] && [ "$(ls -A ${HOME}/rpmbuild)" ]; then
>&2 echo "Error: ${HOME}/rpmbuild already exists and is not empty"
return 1
fi
# set nginx version
if [ -z ${ngxVersion} ]; then
ngxVersion=1.26.2-1
fi
elVersion=$(rpmbuild --eval %{rhel})
echo "Downloading nginx version ${ngxVersion} (EL${elVersion})"
src_package_name="nginx-${ngxVersion}.el${elVersion}.ngx.src.rpm"
src_package_url="https://nginx.org/packages/centos/${elVersion}/SRPMS/${src_package_name}"
wget ${src_package_url}
rpm -i ${src_package_name}
}
buildHttpgNginxRPM() {
if ! printenv CI_PROJECT_DIR > /dev/null; then
>&2 echo "CI_PROJECT_DIR is not set in the environment, assuming the current working directory '${PWD}'"
export CI_PROJECT_DIR="${PWD}"
fi
sh ${CI_PROJECT_DIR}/rpm/addPatchToNginxSpec.sh
# build rpm
rpmlint ~/rpmbuild/SPECS/nginx.spec
rpmbuild -ba ~/rpmbuild/SPECS/nginx.spec
}
buildVomsModuleRPM() {
if [ -z ${CI_PROJECT_DIR} ]; then
CI_PROJECT_DIR="/workspaces/ngx_http_voms_module"
fi
# set voms modules sources
cd ~/rpmbuild/SOURCES/
mkdir ngx-http-voms-module
cp ${CI_PROJECT_DIR}/config ngx-http-voms-module/
cp ${CI_PROJECT_DIR}/config.make ngx-http-voms-module/
cp -r ${CI_PROJECT_DIR}/src ngx-http-voms-module/
cp ${CI_PROJECT_DIR}/rpm/nginx-module-http-voms.spec ~/rpmbuild/SPECS
# build rpm
rpmlint ~/rpmbuild/SPECS/nginx-module-http-voms.spec
rpmbuild -ba ~/rpmbuild/SPECS/nginx-module-http-voms.spec
cd ${CI_PROJECT_DIR}
}
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.159.0/containers/cpp
{
"name": "C++",
"build": {
"dockerfile": "Dockerfile",
},
"runArgs": [
"--cap-add=SYS_PTRACE",
"--security-opt",
"seccomp=unconfined"
],
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.defaultProfile.linux": "bash"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-vscode.cpptools",
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
//"postCreateCommand": "sudo debuginfo-install -y voms",
// Comment out this line to run as root instead.
"remoteUser": "vscode",
"remoteEnv": {"NGX_HTTP_VOMS_MODULE_ROOT": "${containerWorkspaceFolder}"}
}
#!/usr/bin/env bash
# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
set -ex
dnf install -y epel-release
dnf update -y
dnf install -y --setopt=tsflags=nodocs \
which \
wget \
sudo \
file \
git \
gcc-c++ \
gd-devel \
gettext \
ccache \
libxslt-devel \
lcov \
perl-ExtUtils-Embed \
perl-Digest-SHA \
readline-devel \
boost-devel \
voms-devel \
make \
patch \
zlib-devel \
pcre2-devel \
rpmdevtools \
rpmlint \
cpan \
voms-clients-cpp
#!/usr/bin/env bash
# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
USERNAME=${1}
USER_UID=${2}
USER_GID=${3}
set -e
if [ "$(id -u)" -ne 0 ]; then
echo -e 'Script must be run as root. Use sudo, su, or add "USER root" to your Dockerfile before running this script.'
exit 1
fi
groupadd --gid $USER_GID $USERNAME
useradd --uid $USER_UID --gid $USER_GID -m $USERNAME
echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME
chmod 0440 /etc/sudoers.d/$USERNAME
CODESPACES_BASH="$(cat \
<<'EOF'
# Codespaces bash prompt theme
__bash_prompt() {
local userpart='`export XIT=$? \
&& [ ! -z "${GITHUB_USER}" ] && echo -n "\[\033[0;32m\]@${GITHUB_USER} " || echo -n "\[\033[0;32m\]\u " \
&& [ "$XIT" -ne "0" ] && echo -n "\[\033[1;31m\]➜" || echo -n "\[\033[0m\]➜"`'
local gitbranch='`\
export BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null); \
if [ "${BRANCH}" = "HEAD" ]; then \
export BRANCH=$(git describe --contains --all HEAD 2>/dev/null); \
fi; \
if [ "${BRANCH}" != "" ]; then \
echo -n "\[\033[0;36m\](\[\033[1;31m\]${BRANCH}" \
&& if git ls-files --error-unmatch -m --directory --no-empty-directory -o --exclude-standard ":/*" > /dev/null 2>&1; then \
echo -n " \[\033[1;33m\]✗"; \
fi \
&& echo -n "\[\033[0;36m\]) "; \
fi`'
local lightblue='\[\033[1;34m\]'
local removecolor='\[\033[0m\]'
PS1="${userpart} ${lightblue}\w ${gitbranch}${removecolor}\$ "
unset -f __bash_prompt
}
__bash_prompt
EOF
)"
USER_RC_PATH="/home/${USERNAME}"
echo "${CODESPACES_BASH}" >> "${USER_RC_PATH}/.bashrc"
chown ${USERNAME}:${USER_GID} "${USER_RC_PATH}/.bashrc"
echo "Done!"
.vscode
servroot*
nginx
nginx
docker/artifacts
node_modules
t/*
!t/README.md
!t/*.t
!t/setup.sh
!t/conf.d
!t/proxies.d
!t/openssl.conf
!t/socket.js
image: storm2/ngx-voms-build:latest
stages:
- build
- test
- docker-build
- docker-push
- deploy
build-no-debug:
build-rpms-el8:
stage: build
script:
- env
- sh ${HOME}/build-install-ngx-voms.sh
- mv ${HOME}/local/openresty openresty && rm openresty/nginx/sbin/nginx.old && tar cvzf openresty-no-debug.tar.gz openresty
image: almalinux:8
script:
- env | sort
- dnf -y install epel-release
- dnf install -y wget openssl-devel zlib-devel pcre2-devel make rpmdevtools rpmlint boost-devel voms-devel gcc-c++
- source .devcontainer/assets/build-library.sh
- CI_PROJECT_DIR=$PWD
- downloadNginx
- buildHttpgNginxRPM
- buildVomsModuleRPM
- cd ${CI_PROJECT_DIR}/docker && mkdir artifacts
- cp ~/rpmbuild/SRPMS/* artifacts/
- cp ~/rpmbuild/RPMS/x86_64/* artifacts/
artifacts:
paths:
- openresty-no-debug.tar.gz
- docker/artifacts/
build4c:
build-rpms-el9:
stage: build
script:
- env
- sh ${HOME}/build-install-ngx-voms.sh -d -c
- mv ${HOME}/local local
- mv ${HOME}/openresty-1.13.6.1/build/nginx-1.13.6 nginx-1.13.6
- tar cvzf artifacts.tar.gz local nginx-1.13.6
artifacts:
paths:
- artifacts.tar.gz
test4c:
stage: test
dependencies:
- build4c
script:
- rm -rf ${HOME}/local/
- rm -rf ${HOME}/openresty-1.13.6.1/build/nginx-1.13.6/
- tar xvzf artifacts.tar.gz
- mv local ${HOME}
- mv nginx-1.13.6 ${HOME}/openresty-1.13.6.1/build/
- sh test-ngx-voms.sh
- sh cov-ngx-voms.sh
- mv /tmp/coverage-report coverage
artifacts:
paths:
- coverage
pages:
stage: deploy
image: docker:latest
dependencies:
- test4c
script:
- mv coverage/ public/
image: almalinux:9
script:
- env | sort
- dnf -y install epel-release
- dnf install -y wget openssl-devel zlib-devel pcre2-devel make rpmdevtools rpmlint boost-devel voms-devel gcc-c++
- source .devcontainer/assets/build-library.sh
- CI_PROJECT_DIR=$PWD
- downloadNginx
- buildHttpgNginxRPM
- buildVomsModuleRPM
- cd ${CI_PROJECT_DIR}/docker && mkdir artifacts
- cp ~/rpmbuild/SRPMS/* artifacts/
- cp ~/rpmbuild/RPMS/x86_64/* artifacts/
artifacts:
paths:
- public
expire_in: 30 days
- docker/artifacts/
docker-build:
docker-build-rpms:
stage: docker-build
image: docker:latest
services:
- docker:dind
- name: docker:dind
command: ["--tls=false"]
dependencies:
- build-no-debug
- build-rpms-el8
- build-rpms-el9
script:
- cp openresty-no-debug.tar.gz ${CI_PROJECT_DIR}/docker/openresty.tar.gz && cd ${CI_PROJECT_DIR}/docker && sh build-image.sh
- docker tag storm2/ngx-voms:latest ${CI_REGISTRY_IMAGE}/ngx-voms:${CI_COMMIT_SHA:0:8}
- apk add git bash
- git clone https://baltig.infn.it/mw-devel/helper-scripts.git helper-scripts
- cp helper-scripts/scripts/* /usr/local/bin
- rm ${CI_PROJECT_DIR}/docker/artifacts/*-debuginfo*.rpm
- docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
- docker push ${CI_REGISTRY_IMAGE}/ngx-voms:${CI_COMMIT_SHA:0:8}
- export DOCKER_REGISTRY_HOST=${CI_REGISTRY}
- export DOCKER_REGISTRY_NAMESPACE=${CI_PROJECT_PATH}
- cd docker && build-docker-image.sh && push-docker-image.sh
dockerhub-push:
push-to-dockerhub:
stage: docker-push
image: docker:latest
services:
- docker:dind
- name: docker:dind
command: ["--tls=false"]
dependencies:
- docker-build
- docker-build-rpms
script:
- apk add git bash
- git clone https://baltig.infn.it/mw-devel/helper-scripts.git helper-scripts
- cp helper-scripts/scripts/* /usr/local/bin
- export DOCKER_PUSH_TO_DOCKERHUB=y
- env | sort
- docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
- docker pull ${CI_REGISTRY_IMAGE}/ngx-voms:${CI_COMMIT_SHA:0:8}
- docker tag ${CI_REGISTRY_IMAGE}/ngx-voms:${CI_COMMIT_SHA:0:8} storm2/ngx-voms:${CI_COMMIT_SHA:0:8}
- docker tag ${CI_REGISTRY_IMAGE}/ngx-voms:${CI_COMMIT_SHA:0:8} storm2/ngx-voms:latest
- docker login -u ${DOCKERHUB_USER} -p ${DOCKERHUB_PASSWORD}
- docker push storm2/ngx-voms:${CI_COMMIT_SHA:0:8}
- docker push storm2/ngx-voms:latest
- export DOCKER_REGISTRY_HOST=${CI_REGISTRY}
- export DOCKER_REGISTRY_NAMESPACE=${CI_PROJECT_PATH}
- cd docker && pull-docker-image.sh && unset DOCKER_REGISTRY_HOST
- docker login -u ${DOCKERHUB_USER} -p ${DOCKERHUB_PASSWORD} && push-docker-image.sh
only:
- master
# `ngx_http_voms_module`
[![pipeline status](https://baltig.infn.it/cnafsd/ngx_http_voms_module/badges/master/pipeline.svg)](https://baltig.infn.it/cnafsd/ngx_http_voms_module/commits/master)
## Description
`ngx_http_voms_module` is a module for the [Nginx web server](https://www.nginx.org/) that enables client-side authentication based on X.509 proxy certificates augmented with VOMS Attribute Certificates, typically obtained from a [Virtual Organization Membership Service](https://italiangrid.github.io/voms/) (VOMS) server.
The module defines a set of *embedded* variables, whose values are extracted from the first Attribute Certificate found in the certificate chain.
### Embedded Variables
The module makes the following embedded variables available for use in an Nginx configuration file:
#### voms_user
The Subject of the End-Entity certificate, used to sign the proxy.
_Example_: `/C=IT/O=IGI/CN=test0`
#### ssl_client_ee_s_dn
Like `voms_user`, the Subject of the End-Entity certificate. Unlike `voms_user`, it is available even for non-VOMS proxies and is formatted according to RFC 2253.
_Example_: `CN=test0,O=IGI,C=IT`
#### voms_user_ca
The Issuer (Certificate Authority) of the End-Entity certificate.
_Example_: `/C=IT/O=IGI/CN=Test CA`
#### ssl_client_ee_i_dn
Like `voms_user_ca`, the Issuer of the End-Entity certificate. Unlike `voms_user_ca`, it is available even for non-VOMS proxies and is formatted according to RFC 2253.
_Example_: `CN=Test CA,O=IGI,C=IT`
#### voms_fqans
A comma-separated list of Fully Qualified Attribute Names. See [The VOMS Attribute Certificate Format](http://ogf.org/documents/GFD.182.pdf) for more details.
_Example_: `/test.vo/exp1,/test.vo/exp2,/test.vo/exp3/Role=PIPPO`
#### voms_server
The Subject of the VOMS server certificate, used to sign the Attribute Certificate.
_Example_: `/C=IT/O=IGI/CN=voms.example`
#### voms_server_ca
The Issuer (Certificate Authority) of the VOMS server certificate.
_Example_: `/C=IT/O=IGI/CN=Test CA`
#### voms_vo
The name of the Virtual Organization (VO) to which the End Entity belongs.
_Example_: `test.vo`
#### voms_server_uri
The hostname and port of the VOMS network service that issued the Attribute Certificate, in the form _hostname_:_port_.
_Example_: `voms.example:15000`
#### voms_not_before
The date before which the Attribute Certificate is not yet valid, in the form _YYYYMMDDhhmmss_ `Z`.
_Example_: `20180101000000Z`
#### voms_not_after
The date after which the Attribute Certificate is not valid anymore, in the form _YYYYMMDDhhmmss_ `Z`.
_Example_: `20180101120000Z`
#### voms_generic_attributes
A comma-separated list of attributes, each defined by three properties and formatted as `n=`_name_ `v=`_value_ `q=`_qualifier_. The qualifier typically coincides with the name of the VO.
_Example_: `n=nickname v=newland q=test.vo,n=nickname v=giaco q=test.vo`
#### voms_serial
The serial number of the Attribute Certificate in hexadecimal format.
_Example_: `7B`
## Installation
### Prerequisites
The software dependencies are listed in the [provide-deps](docker/library-scripts/provide-deps.sh) script in the `docker` directory.
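If you are working outside the provided containers, one way to install the dependencies is to run that script directly (it uses `dnf`, so it needs root privileges):
```shell
$ sudo sh docker/library-scripts/provide-deps.sh
```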
The nginx source files are also needed. To download them to `/tmp/nginx-x.y.z` you can execute:
```shell
$ ngxVersion=<version>
$ wget -O /tmp/nginx-$ngxVersion.tar.gz https://nginx.org/download/nginx-$ngxVersion.tar.gz
$ cd /tmp && tar -xzvf /tmp/nginx-$ngxVersion.tar.gz && cd -
```
### Generic installation
The generic installation instructions are:
```shell
$ cd /tmp/nginx-x.y.z
$ ./configure --add-module=/path/to/ngx_http_voms_module
$ make && make install
```
### Docker container
A Docker image with nginx and the `ngx_http_voms_module` is available; you can find it [here](https://hub.docker.com/r/cnafsd/nginx-httpg-voms).
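For instance, the image can be pulled and started directly (a minimal sketch, assuming the default configuration listening on port 80):
```shell
$ docker pull cnafsd/nginx-httpg-voms
$ docker run --rm -p 80:80 cnafsd/nginx-httpg-voms
```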
### For the developers
A [.devcontainer](.devcontainer) is provided for developers, with instructions on how to use it, how to build the module RPM and how to install it.
## Testing
Setup and files to test the `ngx_http_voms_module` are contained in the [`t`](t) folder.
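The test suite is not described in detail here; as a purely illustrative sketch, assuming the Test::Nginx framework (suggested by the `perl-Test-Nginx` dependency) and that `t/setup.sh` prepares the test fixtures, a run could look like:
```shell
$ sh t/setup.sh    # hypothetical: prepare certificates, proxies and other fixtures
$ prove -r t       # run the Test::Nginx test files (t/*.t)
```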
ngx_addon_name=ngx_http_voms_module
ngx_module_type=HTTP
ngx_addon_name=voms
ngx_module_name=ngx_http_voms_module
ngx_module_srcs="$ngx_addon_dir/src/ngx_http_voms_module.cpp"
ngx_module_libs="-lvomsapi -lstdc++"
......
echo "objs/addon/src/ngx_http_voms_module.o: CFLAGS += -Werror" >> $NGX_MAKEFILE
#!/bin/sh
# Copyright 2018 Istituto Nazionale di Fisica Nucleare
#
# Licensed under the EUPL, Version 1.2 or - as soon they will be approved by the
# European Commission - subsequent versions of the EUPL (the "Licence"). You may
# not use this work except in compliance with the Licence. You may obtain a copy
# of the Licence at:
#
# https://joinup.ec.europa.eu/software/page/eupl
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the Licence is distributed on an "AS IS" basis, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# Licence for the specific language governing permissions and limitations under
# the Licence.
# This script builds in debug mode and installs openresty together with the
# ngx_http_voms_module.
#
# The script requires the locations of the openresty bundle and of the
# ngx_http_voms_module code (for example as checked-out from git). The locations
# are expressed by the environment variables OPENRESTY_ROOT and
# NGX_HTTP_VOMS_MODULE_ROOT respectively, if available. If they are not set,
# they are guessed:
# * a unique openresty bundle is looked for in ${HOME}
# * the ngx_http_voms_module code is looked for in the working directory of the
# continuous integration environment first and then in ${HOME}
#
# The script works best (i.e. it is tested) if run within a docker container
# started from the storm2/ngx-voms-build image.
#geninfo --base-directory ${HOME}/openresty-1.13.6.1/build/nginx-1.13.6/objs/addon/src/ --output-filename coverage.info ${HOME}/openresty-1.13.6.1/build/nginx-1.13.6/objs/addon/src/
geninfo --output-filename /tmp/coverage.info ${HOME}/openresty-1.13.6.1/build/nginx-1.13.6/objs/addon/src/
genhtml --prefix ${HOME}/openresty-1.13.6.1/build/nginx-1.13.6/objs/addon/src/ --ignore-errors source --demangle-cpp /tmp/coverage.info \
--legend --title "coverage nginx" --output-directory=/tmp/coverage-report
exit_status=$?
if [ ! $exit_status -eq 0 ]; then
echo "check output"
fi
echo $exit_status
DOCKER_IMAGE=cnafsd/nginx-httpg-voms
DOCKER_VERBOSE=y
DOCKER_GIT_TAG_ENABLED=y
FROM storm2/base:latest
# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
RUN sudo yum -y install voms zlib pcre readline gettext && \
sudo yum clean all && rm -rf /var/cache/yum && \
mkdir -p /etc/nginx/conf.d && \
mkdir -p /home/build/local && \
chown -R build:build /etc/nginx/conf.d /home/build/local
ARG EL_VERSION=9
USER build
ADD openresty.tar.gz /home/build/local
FROM almalinux:${EL_VERSION}
RUN ls -lR /home/build && sudo chown -R build:build /home/build
# https://docs.docker.com/reference/dockerfile/#understand-how-arg-and-from-interact
ARG EL_VERSION
RUN \
touch /home/build/local/openresty/nginx/logs/access.log && \
touch /home/build/local/openresty/nginx/logs/error.log && \
ln -sf /dev/stdout /home/build/local/openresty/nginx/logs/access.log && \
ln -sf /dev/stderr /home/build/local/openresty/nginx/logs/error.log
# Allow customization of nginx user ID and name
ARG USERNAME=nginx
ARG USER_UID=1000
ARG USER_GID=${USER_UID}
COPY assets/nginx.conf /home/build/local/openresty/nginx/conf/nginx.conf
COPY assets/srm.conf /etc/nginx/conf.d/
# install dependencies
COPY library-scripts/*.sh /tmp/library-scripts/
RUN dnf update -y && \
sh /tmp/library-scripts/provide-deps.sh && \
sh /tmp/library-scripts/provide-user.sh ${USERNAME} ${USER_UID} ${USER_GID} && \
mkdir /pkgs && \
dnf clean all && rm -rf /var/cache/dnf
USER root
COPY artifacts/*.rpm /pkgs/
# Embed TINI since compose v3 syntax does not support the init
# option to run docker --init
#
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
# install nginx httpg + voms and njs dynamic modules (latest version)
COPY nginx.repo /etc/yum.repos.d/nginx.repo
RUN rpm -ivh /pkgs/nginx-*.el${EL_VERSION}.httpg.x86_64.rpm && \
rpm -ivh /pkgs/nginx-module-http-voms-*.el${EL_VERSION}.x86_64.rpm && \
dnf install -y nginx-module-njs \
# forward request and error logs to docker log collector
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
CMD ["/home/build/local/openresty/bin/openresty", "-g", "daemon off;"]
# install nginx + voms and njs dynamic modules
# RUN dnf -y install nginx nginx-module-njs && \
# rpm -ivh /pkgs/nginx-module-http-voms-1.24.0-1.el${EL_VERSION}.x86_64.rpm
EXPOSE 80
STOPSIGNAL SIGQUIT
CMD ["nginx", "-g", "daemon off;"]
This folder contains a Dockerfile to run an instance of Openresty/NGINX compiled and linked against the `ngx_http_voms_module`.
For more details see the [Dockerfile](./Dockerfile).
The default configuration for NGINX is provided in [this conf file](
./assets/nginx.conf).
A configuration for the `/srm` endpoint, useful for the StoRM docker compose file, is provided in [this conf file](./assets/srm.conf).
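A hypothetical way to run the image against these configuration files, assuming host certificates in `./certs` and VOMS `.lsc` files in `./vomsdir` (names, paths and the exposed port are illustrative and must be adapted to the actual deployment):
```shell
$ docker run --rm -p 443:443 \
    -v "$PWD/certs:/certs:ro" \
    -v "$PWD/vomsdir:/vomsdir:ro" \
    cnafsd/nginx-httpg-voms
```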
user build;
worker_processes 1;
env X509_VOMS_DIR=/vomsdir;
error_log logs/error.log warn;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format storm '$time_iso8601 [$request_id] $remote_addr - $remote_user "$request" <$upstream_response_time> '
'$ssl_protocol/$ssl_cipher '
'"$ssl_client_s_dn" '
'[$voms_fqans] '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log storm;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
server {
error_log logs/error.log debug;
access_log logs/access.log storm;
listen 443 ssl;
server_name storm.example;
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /certs/cert.pem;
ssl_certificate_key /certs/key.pem;
ssl_client_certificate /etc/pki/tls/certs/ca-bundle.crt;
ssl_verify_client optional;
ssl_verify_depth 100;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
location /srm {
proxy_pass http://fe:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
# Simple tracing via request_id
proxy_set_header X-Request-Id $request_id;
# VOMS headers
proxy_set_header x-ssl_client_ee_s_dn $ssl_client_ee_s_dn;
proxy_set_header x-ssl_client_ee_i_dn $ssl_client_ee_i_dn;
proxy_set_header x-voms_fqans $voms_fqans;
proxy_set_header x-voms_user $voms_user;
proxy_set_header x-voms_user_ca $voms_user_ca;
proxy_set_header x-voms_vo $voms_vo;
proxy_set_header x-voms_not_before $voms_not_before;
proxy_set_header x-voms_not_after $voms_not_after;
proxy_set_header x-voms_generic_attributes $voms_generic_attributes;
proxy_set_header x-voms_serial $voms_serial;
}
}
#!/bin/bash
set -e
NGINX_VOMS_IMAGE=${NGINX_VOMS_IMAGE:-storm2/ngx-voms:latest}
docker build -t ${NGINX_VOMS_IMAGE} .
#!/usr/bin/env bash
# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
set -ex
dnf -y install epel-release wget
# https://openresty.org/en/linux-packages.html#centos
wget https://openresty.org/package/centos/openresty2.repo
mv openresty2.repo /etc/yum.repos.d/openresty.repo
dnf config-manager --set-enabled crb
dnf -y install \
hostname \
which \
tar \
sudo \
file \
readline \
gettext \
less \
openssl \
zlib-devel \
pcre2-devel \
boost-devel \
voms-devel \
patch \
gcc-c++ \
rpmdevtools \
rpmlint \
perl-ExtUtils-Embed \
perl-Test-Nginx \
perl-Digest-SHA \
cpan \
voms-clients-cpp \
procps-ng
#!/usr/bin/env bash
# Copyright 2018-2023 Istituto Nazionale di Fisica Nucleare
# SPDX-License-Identifier: EUPL-1.2
USERNAME=${1}
USER_UID=${2}
USER_GID=${3}
set -ex
groupadd --gid $USER_GID $USERNAME
useradd --uid $USER_UID --gid $USER_GID -m $USERNAME
echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME
chmod 0440 /etc/sudoers.d/$USERNAME