Update to 2021-12-01 19:13

Daniel Berteaud 2021-12-01 19:13:34 +01:00
commit 4c4556c660
2153 changed files with 60999 additions and 0 deletions

README.md

@@ -0,0 +1,178 @@
# Ansible roles
I use Ansible. And I use it **a lot**. There's now nearly nothing I deploy manually without it. As such, I've written a lot of roles to deploy and manage various applications. These include:
* Basic system configuration
* Authentication (e.g., configure LDAP auth, or join an AD domain automatically)
* Plumbing layers (like deploying a MySQL server, a PHP stack, etc.)
* Authentication services (Samba4 in AD DC mode, Lemonldap::NG etc.)
* Collaborative apps (like Zimbra, Matrix, Etherpad, Seafile, OnlyOffice, Jitsi etc.)
* Monitoring tools (deploy Zabbix agent, proxy and server, Fusion Inventory agent, Graylog server)
* Web applications (GLPI, Ampache, Kanboard, Wordpress, Dolibarr, Matomo, Framadate, Dokuwiki etc.)
* Dev tools (Gitea)
* Security tools (OpenXPKI, Vaultwarden, SSH key management, etc.)
* A lot more :-)
Most of my roles are RHEL-centric (tested on AlmaLinux now that CentOS Linux is dead), and are made to be deployed on AlmaLinux 8 servers. Basic roles (like basic system configuration, postfix, etc.) also support Debian/Ubuntu systems, but are less tested.
My roles often depend on other roles. For example, if you deploy glpi, it'll first pull in all the required web and PHP stack (see the sketch below).
Most of the web application roles are made to run behind a reverse proxy. For this, you can use the nginx (recommended) or the httpd_front role.
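To give an idea of how this chaining works, here's a minimal sketch of what a role's meta/main.yml dependency list looks like (the role and variable names below are illustrative, each real role ships its own list):
```
# meta/main.yml of a hypothetical web application role
dependencies:
  # pull in the web server + PHP stack first
  - role: httpd_php
  # and a local MySQL server, but only if the app's database is local
  - role: mysql_server
    when: app_db_server in ['localhost','127.0.0.1']
```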
## How to use this
Here are the steps to make use of this. Note that this is not a complete Ansible how-to, just a quick guide to using my roles. For example, it won't explain how to use ansible-vault to protect sensitive information.
* Clone the repo
```
git clone https://git.lapiole.org/fws/ansible-roles.git
cd ansible-roles
```
* Create a few directories
```
mkdir {inventories,host_vars,group_vars,ssh,config}
```
* Create your SSH key. It's advised to set a passphrase to protect it
```
ssh-keygen -t rsa -b 4096 -f ssh/id_rsa
```
* Create the ansible user account on the hosts you want to manage. This can be done manually or can be automated with tools like kickstart (you can have a look at https://ks.lapiole.org/alma8.ks for example). The ansible user must have elevated privileges with sudo (so you have to ensure sudo is installed)
```
useradd -m ansible
mkdir ~ansible/.ssh
cat <<_EOF > ~ansible/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCwnPxF7vmJA8Jr7I2q6BNRxQIcnlFaA3O58x8532qXIox8fUdYJo0KkjpEl6pBSWGlF4ObTB04/Nks5rhv9Ew+EHO5GvavzVp5L3u8T+PP+idlLlwIERL2R632TBWVbxqvhtc813ozpaMRI7nCabgiIp8rFf4hqYJIn/RMpRdPSQaHrPHQpFEW9uHPbFYZ9+dywY88WXY+VJI1rkIU3NlOAw3GKjEd6iqiOboDl8Ld4qqc+NpqDFPeidYbk5xjKv3l/Y804tdwqO1UYC+psr983rs1Kq91jI/5xSjSQFM51W3HCpZMTzSIt4Swy+m+eqUIrInxMmw72HF2CL+PePHgmusMUBYPdBfqHIxEHEbvPuO67hLAhqH1dUDBp+0oiRSM/J/DX7K+I+jNO43/UtcvnrBjNjzAiiJEG3WRAcBAUpccOu3JHcRN5CLRB26yfLXpFRzUNCnajmdZF7qc0G5gJuy8KpUZ49VTmZmJ0Uzx1rZLaytSjHpf4e5X6F8iTQ1QmORxvCdfdsqoeod7jK384NXq+UD24Y/tEgq/eT7pl3yLCpQo4qKd/aCEBqc2bnLggVRr+WX94ojMdK35qYbdXtLsN5y6L20yde8tGtWY+nmbJzLnqVJ4TKxXKMl7q9Sdj1t7BrqQQIK3H9kP7SZRhWNP6tvNKBgKFgc/k01ldw== ansible@fws.fr
_EOF
chown -R ansible:ansible ~ansible/.ssh/
chmod 700 ~ansible/.ssh/
chmod 600 ~ansible/.ssh/authorized_keys
cat <<_EOF > /etc/sudoers.d/ansible
Defaults:ansible !requiretty
ansible ALL=(ALL) NOPASSWD: ALL
_EOF
chmod 600 /etc/sudoers.d/ansible
```
* Create your inventory file. For example, inventories/acme.ini
```
[infra]
db.acme.com
proxyin.acme.com
```
This will create a single group **infra** with two hosts in it.
* Create your main playbook. This is the file describing what to deploy on which host. You can store it in the root dir, for example acme.yml:
```
- name: Deploy common profiles
  hosts: infra
  roles:
    - common
    - backup
- name: Deploy database servers
  hosts: db.acme.com
  roles:
    - mysql_server
    - postgresql_server
- name: Deploy reverse proxy
  hosts: proxyin.acme.com
  roles:
    - nginx
    - letsencrypt
    - lemonldap_ng
```
It's pretty self-explanatory. First, roles **common** and **backup** will be deployed on every host in the infra group. Then, **mysql_server** and **postgresql_server** will be deployed on **db.acme.com**. And roles **nginx**, **letsencrypt** and **lemonldap_ng** will be deployed on host **proxyin.acme.com**.
* Now, it's time to configure a few things. Configuration is done by assigning values to variables, and can be done at several levels.
* group_vars/all/vars.yml: variables here will be inherited by every host
```
ansible_become: True
trusted_ip:
- 1.2.3.4
- 192.168.47.0/24
zabbix_ip:
- 10.11.12.13
system_admin_groups:
- 'admins'
system_admin_users:
- 'dani'
system_admin_email: servers@example.com
zabbix_agent_encryption: psk
zabbix_agent_servers: "{{ zabbix_ip }}"
zabbix_proxy_encryption: psk
zabbix_proxy_server: 'zabbix.example.com'
```
* group_vars/infra/vars.yml: variables here will be inherited by hosts in the **infra** group
```
sshd_src_ip: "{{ trusted_ip }}"
postfix_relay_host: '[smtp.example.com]:587'
postfix_relay_user: smtp
postfix_relay_pass: "S3cretP@ssw0rd"
ssh_users:
  - name: ansible
    ssh_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCwnPxF7vmJA8Jr7I2q6BNRxQIcnlFaA3O58x8532qXIox8fUdYJo0KkjpEl6pBSWGlF4ObTB04/Nks5rhv9Ew+EHO5GvavzVp5L3u8T+PP+idlLlwIERL2R632TBWVbxqvhtc813ozpaMRI7nCabgiIp8rFf4hqYJIn/RMpRdPSQaHrPHQpFEW9uHPbFYZ9+dywY88WXY+VJI1rkIU3NlOAw3GKjEd6iqiOboDl8Ld4qqc+NpqDFPeidYbk5xjKv3l/Y804tdwqO1UYC+psr983rs1Kq91jI/5xSjSQFM51W3HCpZMTzSIt4Swy+m+eqUIrInxMmw72HF2CL+PePHgmusMUBYPdBfqHIxEHEbvPuO67hLAhqH1dUDBp+0oiRSM/J/DX7K+I+jNO43/UtcvnrBjNjzAiiJEG3WRAcBAUpccOu3JHcRN5CLRB26yfLXpFRzUNCnajmdZF7qc0G5gJuy8KpUZ49VTmZmJ0Uzx1rZLaytSjHpf4e5X6F8iTQ1QmORxvCdfdsqoeod7jK384NXq+UD24Y/tEgq/eT7pl3yLCpQo4qKd/aCEBqc2bnLggVRr+WX94ojMdK35qYbdXtLsN5y6L20yde8tGtWY+nmbJzLnqVJ4TKxXKMl7q9Sdj1t7BrqQQIK3H9kP7SZRhWNP6tvNKBgKFgc/k01ldw== ansible@fws.fr
  - name: dani
    allow_forwarding: True
    ssh_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCwnPxF7vmJA8Jr7I2q6BNRxQIcnlFaA3O58x8532qXIox8fUdYJo0KkjpEl6pBSWGlF4ObTB04/Nks5rhv9Ew+EHO5GvavzVp5L3u8T+PP+idlLlwIERL2R632TBWVbxqvhtc813ozpaMRI7nCabgiIp8rFf4hqYJIn/RMpRdPSQaHrPHQpFEW9uHPbFYZ9+dywY88WXY+VJI1rkIU3NlOAw3GKjEd6iqiOboDl8Ld4qqc+NpqDFPeidYbk5xjKv3l/Y804tdwqO1UYC+psr983rs1Kq91jI/5xSjSQFM51W3HCpZMTzSIt4Swy+m+eqUIrInxMmw72HF2CL+PePHgmusMUBYPdBfqHIxEHEbvPuO67hLAhqH1dUDBp+0oiRSM/J/DX7K+I+jNO43/UtcvnrBjNjzAiiJEG3WRAcBAUpccOu3JHcRN5CLRB26yfLXpFRzUNCnajmdZF7qc0G5gJuy8KpUZ49VTmZmJ0Uzx1rZLaytSjHpf4e5X6F8iTQ1QmORxvCdfdsqoeod7jK384NXq+UD24Y/tEgq/eT7pl3yLCpQo4qKd/aCEBqc2bnLggVRr+WX94ojMdK35qYbdXtLsN5y6L20yde8tGtWY+nmbJzLnqVJ4TKxXKMl7q9Sdj1t7BrqQQIK3H9kP7SZRhWNP6tvNKBgKFgc/k01ldw== dani@fws.fr
# Default database server
mysql_server: db.acme.com
mysql_admin_pass: "r00tP@ss"
pg_server: db.acme.com
pg_admin_pass: "{{ mysql_admin_pass }}"
letsencrypt_challenge: dns
letsencrypt_dns_provider: gandi
letsencrypt_dns_provider_options: '--api-protocol=rest'
letsencrypt_dns_auth_token: "G7BL9RzkZdUI"
```
* host_vars/proxyin.acme.com/vars.yml: variables here will be inherited only by the host **proxyin.acme.com**
```
nginx_auto_letsencrypt_cert: True
# Default vhost settings
nginx_default_vhost_extra:
  auth: llng
  csp: >-
    default-src 'self' 'unsafe-inline' blob:;
    style-src-elem 'self' 'unsafe-inline' data:;
    img-src 'self' data: blob: https://stats.fws.fr;
    script-src 'self' 'unsafe-inline' 'unsafe-eval' https://stats.acme.com blob:;
    font-src 'self' data:
  proxy:
    cache: True
    backend: http://web1.acme.com
nginx_vhosts:
  - name: mail-filter.example.com
    proxy:
      backend: https://10.64.2.10:8006
    allowed_methods: [GET,HEAD,POST,PUT,DELETE]
    src_ip: "{{ trusted_ip }}"
    auth: False
  - name: graphes.acme.com
    proxy:
      backend: http://10.64.3.15:3000
    allowed_methods: [GET,HEAD,POST,PUT,DELETE]
```
## How to check available variables
Every role has default variables set in its defaults subfolder. Have a look there to see which variables are available and what their default values are.
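For example, assuming the usual roles/<role_name>/defaults/main.yml layout, a quick way to see what you can override before setting anything in group_vars or host_vars:
```
# show the tunable variables (and their default values) of the mysql_server role
cat roles/mysql_server/defaults/main.yml
```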
## Contact
You can contact me at ansible AT lapiole DOT org if needed

ansible.cfg

@@ -0,0 +1,15 @@
[defaults]
remote_user = ansible
private_key_file = ssh/id_rsa
ansible_managed = Managed by ansible, manual modifications will be lost
ask_vault_pass = True
remote_tmp = /tmp/.ansible-${USER}/tmp
timeout = 30
[privilege_escalation]
become=True
[ssh_connection]
ssh_args = -F ssh/config
control_path = /tmp/ans-ssh-%%C
pipelining = True
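This ansible.cfg sits at the repository root, so playbooks are meant to be run from there. A typical invocation, reusing the acme inventory and playbook from the README above (the vault password is prompted because ask_vault_pass is enabled), could look like:
```
# deploy everything described in acme.yml
ansible-playbook -i inventories/acme.ini acme.yml
# or limit the run to a single host
ansible-playbook -i inventories/acme.ini acme.yml --limit proxyin.acme.com
```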

library/iptables_raw.py

File diff suppressed because it is too large

playbooks/update_all.yml

@@ -0,0 +1,9 @@
---
- name: Update everything
  hosts: '*'
  tasks:
    - yum: name='*' state=latest
      when: ansible_os_family == 'RedHat'
    - apt: name='*' state=latest
      when: ansible_os_family == 'Debian'

@@ -0,0 +1,7 @@
---
- name: Update ca-certificates
  hosts: '*'
  tasks:
    - name: Update ca-certificates
      package: name=ca-certificates state=latest

@@ -0,0 +1,42 @@
---
- name: Update Zabbix
  hosts: '*'
  tasks:
    - yum:
        name:
          - zabbix-agent
          - zabbix-agent-addons
        state: latest
      when: ansible_os_family == 'RedHat'
      notify: restart zabbix-agent
    - apt:
        name:
          - zabbix-agent
        update_cache: True
        state: latest
      when: ansible_os_family == 'Debian'
      notify: restart zabbix-agent
    - git:
        repo: https://git.fws.fr/fws/zabbix-agent-addons.git
        dest: /var/lib/zabbix/addons
      register: zabbix_agent_addons_git
      when: ansible_os_family == 'Debian'
      notify: restart zabbix-agent
    - shell: cp -af /var/lib/zabbix/addons/{{ item.src }}/* {{ item.dest }}/
      with_items:
        - { src: zabbix_conf, dest: /etc/zabbix/zabbix_agentd.conf.d }
        - { src: zabbix_scripts, dest: /var/lib/zabbix/bin }
        - { src: lib, dest: /usr/local/lib/site_perl }
      when:
        - zabbix_agent_addons_git.changed
        - ansible_os_family == 'Debian'
    - shell: chmod +x /var/lib/zabbix/bin/*
      args:
        warn: False
      when:
        - zabbix_agent_addons_git.changed
        - ansible_os_family == 'Debian'
  handlers:
    - name: restart zabbix-agent
      service: name=zabbix-agent state=restarted

@@ -0,0 +1,34 @@
# Akeneo PIM
[Akeneo PIM](https://www.akeneo.com/) is a Product Information Management (PIM) solution aimed at centralizing all the marketing data.
## Settings
Akeneo requires a few settings at the host level. Something like this
```
# This should be defined on the server which will host the database
# It's not mandatory to be on the same host as the PIM itself. But the important thing is that Akeneo PIM
# requires MySQL. It won't work with MariaDB
mysql_engine: mysql
# Prevent an error when checking system requirements. Note that this is only for the CLI
# as web access will use its own FPM pool
php_conf_memory_limit: 512M
# We need Elasticsearch 7. Same as for MySQL, it's not required to be on the same host
es_major_version: 7
# Define a vhost to expose the PIM. Note that this is a minimal example
# and you will most likely want to put a reverse proxy (look at the nginx role) in front of it
httpd_ansible_vhosts:
  - name: pim.example.org
    document_root: /opt/pim_1/app/public
```
## Installation
Installation should be fully automatic
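As a minimal sketch (the akeneo_pim role name and the host are assumptions, adapt them to your own inventory), applying the role from your main playbook is all that's needed:
```
- name: Deploy Akeneo PIM
  hosts: pim.acme.com
  roles:
    - akeneo_pim
```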
## Upgrade
Major upgrades might require some manual steps, as detailed on https://docs.akeneo.com/5.0/migrate_pim/upgrade_major_version.html

@@ -0,0 +1,36 @@
---
# Version to deploy
pim_version: 5.0.43
# User under which the PIM will run
pim_user: php-pim_{{ pim_id }}
# If you install several PIM instances on the same host, you should change the ID for each of them
pim_id: 1
# Root directory of the installation
pim_root_dir: /opt/pim_{{ pim_id }}
# Should ansible handle upgrades or just the initial install
pim_manage_upgrade: True
# PHP version to use
pim_php_version: 74
# Database settings
pim_db_server: "{{ mysql_server | default('localhost') }}"
pim_db_port: 3306
pim_db_name: akeneopim_{{ pim_id }}
pim_db_user: akeneopim_{{ pim_id }}
# A random pass will be generated and stored in {{ pim_root_dir }}/meta/ansible_dbpass if not defined
# pim_db_pass: S3cr3t.
# A secret used to sign cookies. A random one will be generated and stored in {{ pim_root_dir }}/meta/ansible_secret if not defined
# pim_secret: ChangeMe
# Elasticsearch host
pim_es_server: localhost:9200
# Public URL used to reach Akeneo. Note that you'll have to define a vhost for Akeneo PIM to be reachable
pim_public_url: http://pim.{{ inventory_hostname }}/
# Define the initial admin password. If not defined, a random one will be generated and stored under {{ pim_root_dir }}/meta/ansible_admin_pass
# Note that this is only used on initial install, and will be ignored for upgrades
# pim_admin_pass: p@ssw0rd
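In practice, only a handful of these variables usually need to be overridden. A minimal host_vars sketch, with purely illustrative values:
```
# host_vars/pim.acme.com/vars.yml (illustrative values)
pim_public_url: https://pim.acme.com/
pim_db_server: db.acme.com
pim_es_server: es.acme.com:9200
# leave pim_db_pass, pim_secret and pim_admin_pass unset to get random values
# stored under {{ pim_root_dir }}/meta/
```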

@@ -0,0 +1,7 @@
---
- name: restart akeneo-pim
  service: name={{ item }} state=restarted
  loop:
    - akeneo-pim_{{ pim_id }}-jobs
    - akeneo-pim_{{ pim_id }}-events-api

@@ -0,0 +1,12 @@
---
allow_duplicates: True
dependencies:
  - role: mkdir
  - role: composer
  - role: mysql_server
    when: pim_db_server in ['localhost','127.0.0.1']
  - role: httpd_php
  - role: nodejs
  - role: elasticsearch
    when: pim_es_server | regex_replace('(.*):\d+','\\1') in ['localhost','127.0.0.1']

@@ -0,0 +1,10 @@
---
- name: Compress previous version
  command: tar cf {{ pim_root_dir }}/archives/{{ pim_current_version }}.tar.zst ./ --use-compress-program=zstd
  args:
    chdir: "{{ pim_root_dir }}/archives/{{ pim_current_version }}"
    warn: False
  environment:
    ZSTD_CLEVEL: 10
  tags: pim

@@ -0,0 +1,40 @@
---
- name: Create the archive dir
  file: path={{ pim_root_dir }}/archives/{{ pim_current_version }} state=directory
  tags: pim
- name: Stop jobs and event API services
  service: name={{ item }} state=stopped
  loop:
    - akeneo-pim_{{ pim_id }}-jobs
    - akeneo-pim_{{ pim_id }}-events-api
  tags: pim
- name: Disable cron jobs
  file: path=/etc/cron.d/akeneopim_{{ pim_id }} state=absent
  tags: pim
- name: Archive current version
  synchronize:
    src: "{{ pim_root_dir }}/app"
    dest: "{{ pim_root_dir }}/archives/{{ pim_current_version }}/"
    compress: False
    delete: True
  delegate_to: "{{ inventory_hostname }}"
  tags: pim
- name: Dump the database
  mysql_db:
    state: dump
    name: "{{ pim_db_name }}"
    target: "{{ pim_root_dir }}/archives/{{ pim_current_version }}/{{ pim_db_name }}.sql.xz"
    login_host: "{{ pim_db_server }}"
    login_port: "{{ pim_db_port }}"
    login_user: "{{ pim_db_user }}"
    login_password: "{{ pim_db_pass }}"
    quick: True
    single_transaction: True
  environment:
    XZ_OPT: -T0
  tags: pim

@@ -0,0 +1,8 @@
---
- name: Remove tmp and obsolete files
  file: path={{ item }} state=absent
  loop:
    - "{{ pim_root_dir }}/archives/{{ pim_current_version }}"
  tags: pim

@@ -0,0 +1,117 @@
---
- name: Deploy configuration
  template: src=env.j2 dest={{ pim_root_dir }}/app/.env.local group={{ pim_user }} mode=640
  tags: pim
- import_tasks: ../includes/webapps_webconf.yml
  vars:
    - app_id: pim_{{ pim_id }}
    - php_version: "{{ pim_php_version }}"
    - php_fpm_pool: "{{ pim_php_fpm_pool | default('') }}"
  tags: pim
- name: Build and update frontend components
  command: scl enable php{{ pim_php_version }} -- make upgrade-front
  args:
    chdir: "{{ pim_root_dir }}/app"
  environment:
    NO_DOCKER: true
    APP_ENV: prod
  become_user: "{{ pim_user }}"
  when: pim_install_mode != 'none'
  tags: pim
- name: Initialize the database
  command: scl enable php{{ pim_php_version }} -- make database O="--catalog vendor/akeneo/pim-community-dev/src/Akeneo/Platform/Bundle/InstallerBundle/Resources/fixtures/minimal"
  args:
    chdir: "{{ pim_root_dir }}/app"
  environment:
    NO_DOCKER: true
    APP_ENV: prod
  become_user: "{{ pim_user }}"
  when: pim_install_mode == 'install'
  tags: pim
- name: Upgrade database
  command: /bin/php{{ pim_php_version }} {{ pim_root_dir }}/app/bin/console doctrine:migrations:migrate --no-interaction
  args:
    chdir: "{{ pim_root_dir }}/app"
  become_user: "{{ pim_user }}"
  when: pim_install_mode == 'upgrade'
  tags: pim
- name: Deploy permission script
  template: src=perms.sh.j2 dest={{ pim_root_dir }}/perms.sh mode=755
  register: pim_perm_script
  tags: pim
- name: Apply permissions
  command: "{{ pim_root_dir }}/perms.sh"
  when: pim_perm_script.changed or pim_install_mode != 'none'
  tags: pim
- name: Setup cron jobs
  cron:
    cron_file: akeneopim_{{ pim_id }}
    user: "{{ pim_user }}"
    name: "{{ item.name }}"
    job: /bin/php{{ pim_php_version }} {{ pim_root_dir }}/app/bin/console {{ item.job }}
    minute: "{{ item.minute | default('*') }}"
    hour: "{{ item.hour | default('*') }}"
    weekday: "{{ item.weekday | default('*') }}"
    day: "{{ item.day | default('*') }}"
    month: "{{ item.month | default('*') }}"
  loop:
    - name: refresh
      job: pim:versioning:refresh
      minute: 30
      hour: 1
    - name: purge
      job: pim:versioning:purge --more-than-days 90 --no-interaction --force
      minute: 30
      hour: 2
    - name: update-data
      job: akeneo:connectivity-audit:update-data
      minute: 1
    - name: purge-errors
      job: akeneo:connectivity-connection:purge-error
      minute: 10
    - name: purge-job-execution
      job: akeneo:batch:purge-job-execution
      minute: 20
      hour: 0
      day: 1
    - name: purge-error-count
      job: akeneo:connectivity-audit:purge-error-count
      minute: 40
      hour: 0
    - name: aggregate
      job: pim:volume:aggregate
      minute: 30
      hour: 4
    - name: schedule-periodic-tasks
      job: pim:data-quality-insights:schedule-periodic-tasks
      minute: 15
      hour: 0
    - name: prepare-evaluations
      job: pim:data-quality-insights:prepare-evaluations
      minute: '*/10'
    - name: evaluations
      job: pim:data-quality-insights:evaluations
      minute: '*/30'
    - name: purge-messages
      job: akeneo:messenger:doctrine:purge-messages messenger_messages default
      minute: 0
      hour: '*/2'
  tags: pim
- name: Create the admin user
  command: /bin/php{{ pim_php_version }} {{ pim_root_dir }}/app/bin/console pim:user:create --admin -n -- admin {{ pim_admin_pass | quote }} admin@example.org Admin Admin fr_FR
  when: pim_install_mode == 'install'
  become_user: "{{ pim_user }}"
  tags: pim
- name: Deploy logrotate conf
  template: src=logrotate.conf.j2 dest=/etc/logrotate.d/akeneopim_{{ pim_id }}
  tags: pim

@@ -0,0 +1,30 @@
---
- name: Create needed directories
  file: path={{ item.dir }} state=directory owner={{ item.owner | default(omit) }} group={{ item.group | default(omit) }} mode={{ item.mode | default(omit) }}
  loop:
    - dir: "{{ pim_root_dir }}/meta"
      mode: 700
    - dir: "{{ pim_root_dir }}/archives"
      mode: 700
    - dir: "{{ pim_root_dir }}/backup"
      mode: 700
    - dir: "{{ pim_root_dir }}/data"
      owner: "{{ pim_user }}"
      mode: 700
    - dir: "{{ pim_root_dir }}/app"
      owner: "{{ pim_user }}"
      group: "{{ pim_user }}"
    - dir: "{{ pim_root_dir }}/tmp"
      owner: "{{ pim_user }}"
      group: "{{ pim_user }}"
      mode: 700
    - dir: "{{ pim_root_dir }}/sessions"
      owner: "{{ pim_user }}"
      group: "{{ pim_user }}"
      mode: 700
  tags: pim
- name: Link the var directory to the data dir
  file: src={{ pim_root_dir }}/data dest={{ pim_root_dir }}/app/var state=link
  tags: pim

@@ -0,0 +1,38 @@
---
# Detect installed version (if any)
- block:
    - import_tasks: ../includes/webapps_set_install_mode.yml
      vars:
        - root_dir: "{{ pim_root_dir }}"
        - version: "{{ pim_version }}"
    - set_fact: pim_install_mode={{ (install_mode == 'upgrade' and not pim_manage_upgrade) | ternary('none',install_mode) }}
    - set_fact: pim_current_version={{ current_version | default('') }}
  tags: pim
# Create a random pass for the DB if needed
- block:
    - import_tasks: ../includes/get_rand_pass.yml
      vars:
        - pass_file: "{{ pim_root_dir }}/meta/ansible_dbpass"
    - set_fact: pim_db_pass={{ rand_pass }}
  when: pim_db_pass is not defined
  tags: pim
# Create a random secret if needed
- block:
    - import_tasks: ../includes/get_rand_pass.yml
      vars:
        - pass_file: "{{ pim_root_dir }}/meta/ansible_secret"
    - set_fact: pim_secret={{ rand_pass }}
  when: pim_secret is not defined
  tags: pim
# Create a random admin pass if needed
- block:
    - import_tasks: ../includes/get_rand_pass.yml
      vars:
        - pass_file: "{{ pim_root_dir }}/meta/ansible_admin_pass"
    - set_fact: pim_admin_pass={{ rand_pass }}
  when: pim_admin_pass is not defined
  tags: pim

@@ -0,0 +1,95 @@
---
- name: Install needed tools
  package:
    name:
      - make
      - ghostscript
      - aspell
  tags: pim
- when: pim_install_mode == 'upgrade'
  block:
    - name: Wipe install on upgrades
      file: path={{ pim_root_dir }}/app state=absent
    - name: Create app subdir
      file: path={{ pim_root_dir }}/app state=directory owner={{ pim_user }} group={{ pim_user }}
    - name: Link the var directory
      file: src={{ pim_root_dir }}/data dest={{ pim_root_dir }}/app/var state=link
  tags: pim
- when: pim_install_mode != 'none'
  block:
    - name: Deploy composer.json
      template: src=composer.json.j2 dest={{ pim_root_dir }}/app/composer.json owner={{ pim_user }}
      become_user: root
    - name: Install Akeneo with Composer
      composer:
        working_dir: "{{ pim_root_dir }}/app"
        executable: /bin/php{{ pim_php_version }}
        command: install
      become_user: "{{ pim_user }}"
    - name: Install yarn globally
      npm:
        name: yarn
        path: "{{ pim_root_dir }}/app"
        global: True
        state: latest
    - name: Install typescript globally
      npm:
        name: typescript
        path: "{{ pim_root_dir }}/app"
        global: True
        state: latest
  tags: pim
# the PIM makefile has /usr/local/bin/composer hardcoded
- name: Link composer in /usr/local/bin
  file: src=/bin/composer dest=/usr/local/bin/composer state=link
  tags: pim
- import_tasks: ../includes/webapps_create_mysql_db.yml
  vars:
    - db_name: "{{ pim_db_name }}"
    - db_user: "{{ pim_db_user }}"
    - db_server: "{{ pim_db_server }}"
    - db_pass: "{{ pim_db_pass }}"
  tags: pim
- name: Set correct SELinux context
  sefcontext:
    target: "{{ pim_root_dir }}(/.*)?"
    setype: httpd_sys_content_t
    state: present
  when: ansible_selinux.status == 'enabled'
  tags: pim
- name: Install pre/post backup hooks
  template: src={{ item }}-backup.j2 dest=/etc/backup/{{ item }}.d/pim_{{ pim_id }} mode=700
  loop:
    - pre
    - post
  tags: pim
- name: Install job consumer and events api service units
  template: src={{ item.src }} dest=/etc/systemd/system/{{ item.dest }}
  loop:
    - src: akeneo-pim-jobs.service.j2
      dest: akeneo-pim_{{ pim_id }}-jobs.service
    - src: akeneo-pim-events-api.service.j2
      dest: akeneo-pim_{{ pim_id }}-events-api.service
  register: pim_job_unit
  notify: restart akeneo-pim
  tags: pim
- name: Reload systemd
  systemd: daemon_reload=True
  when: pim_job_unit.results | selectattr('changed','equalto',True) | list | length > 0
  tags: pim

@@ -0,0 +1,13 @@
---
- include: user.yml
- include: directories.yml
- include: facts.yml
- include: archive_pre.yml
  when: pim_install_mode == 'upgrade'
- include: install.yml
- include: conf.yml
- include: write_version.yml
- include: archive_post.yml
  when: pim_install_mode == 'upgrade'
- include: cleanup.yml

@@ -0,0 +1,8 @@
---
- name: Start services
  service: name={{ item }} state=started enabled=True
  loop:
    - akeneo-pim_{{ pim_id }}-jobs
    - akeneo-pim_{{ pim_id }}-events-api
  tags: pim

@@ -0,0 +1,9 @@
---
- name: Create user
  user:
    name: "{{ pim_user }}"
    system: True
    home: "{{ pim_root_dir }}"
    shell: /sbin/nologin
  tags: pim

@@ -0,0 +1,5 @@
---
- name: Write current installed version
  copy: content={{ pim_version }} dest={{ pim_root_dir }}/meta/ansible_version
  tags: pim

@@ -0,0 +1,22 @@
[Unit]
Description=Akeneo Events API worker for PIM {{ pim_id }}
[Service]
User={{ pim_user }}
Group={{ pim_user }}
WorkingDirectory={{ pim_root_dir }}/app
ExecStart=/bin/php{{ pim_php_version }} bin/console messenger:consume webhook --env=prod
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=full
ProtectHome=yes
NoNewPrivileges=yes
MemoryLimit=1024M
SyslogIdentifier=akeneo-pim_{{ pim_id }}-events-api
Restart=on-failure
StartLimitInterval=0
RestartSec=30
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,22 @@
[Unit]
Description=Akeneo jobs worker for PIM {{ pim_id }}
[Service]
User={{ pim_user }}
Group={{ pim_user }}
WorkingDirectory={{ pim_root_dir }}/app
ExecStart=/bin/php{{ pim_php_version }} bin/console akeneo:batch:job-queue-consumer-daemon --env=prod
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=full
ProtectHome=yes
NoNewPrivileges=yes
MemoryLimit=1024M
SyslogIdentifier=akeneo-pim_{{ pim_id }}-jobs
Restart=on-failure
StartLimitInterval=0
RestartSec=30
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,44 @@
{
"name": "akeneo/pim-community-standard",
"description": "The \"Akeneo Community Standard Edition\" distribution",
"license": "OSL-3.0",
"type": "project",
"authors": [
{
"name": "Akeneo",
"homepage": "http://www.akeneo.com"
}
],
"autoload": {
"psr-0": {
"": "src/"
},
"psr-4": {
"Pim\\Upgrade\\": "upgrades/"
},
"exclude-from-classmap": [
"vendor/akeneo/pim-community-dev/src/Kernel.php"
]
},
"require": {
"akeneo/pim-community-dev": "^{{ pim_version }}"
},
"require-dev": {
"doctrine/doctrine-migrations-bundle": "1.3.2",
"symfony/debug-bundle": "^4.4.7",
"symfony/web-profiler-bundle": "^4.4.7",
"symfony/web-server-bundle": "^4.4.7"
},
"scripts": {
"post-update-cmd": [
"bash vendor/akeneo/pim-community-dev/std-build/install-required-files.sh"
],
"post-install-cmd": [
"bash vendor/akeneo/pim-community-dev/std-build/install-required-files.sh"
],
"post-create-project-cmd": [
"bash vendor/akeneo/pim-community-dev/std-build/install-required-files.sh"
]
},
"minimum-stability": "stable"
}

@@ -0,0 +1,17 @@
APP_ENV=prod
APP_DEBUG=0
APP_DATABASE_HOST={{ pim_db_server }}
APP_DATABASE_PORT={{ pim_db_port }}
APP_DATABASE_NAME={{ pim_db_name }}
APP_DATABASE_USER={{ pim_db_user }}
APP_DATABASE_PASSWORD={{ pim_db_pass | quote }}
APP_DEFAULT_LOCALE=en
APP_SECRET={{ pim_secret | quote }}
APP_INDEX_HOSTS={{ pim_es_server }}
APP_PRODUCT_AND_PRODUCT_MODEL_INDEX_NAME=akeneo_pim_product_and_product_model
APP_CONNECTION_ERROR_INDEX_NAME=akeneo_connectivity_connection_error
MAILER_URL=null://localhost&sender_address=no-reply@{{ ansible_domain }}
AKENEO_PIM_URL={{ pim_public_url }}
LOGGING_LEVEL=NOTICE
APP_EVENTS_API_DEBUG_INDEX_NAME=akeneo_connectivity_connection_events_api_debug
APP_PRODUCT_AND_PRODUCT_MODEL_INDEX_NAME=akeneo_pim_product_and_product_model

@@ -0,0 +1,31 @@
<Directory {{ pim_root_dir }}/app/public>
AllowOverride All
Options FollowSymLinks
{% if pim_src_ip is defined and pim_src_ip | length > 0 %}
Require ip {{ pim_src_ip | join(' ') }}
{% else %}
Require all granted
{% endif %}
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/{{ pim_php_fpm_pool | default('pim_' + pim_id | string) }}.sock|fcgi://localhost"
</FilesMatch>
RewriteEngine On
# Handle Authorization Header
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
# Send Requests To Front Controller...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]
<FilesMatch "(\.git.*)">
Require all denied
</FilesMatch>
</Directory>
<Directory {{ pim_root_dir }}/app/public/bundles>
RewriteEngine Off
</Directory>

@@ -0,0 +1,6 @@
{{ pim_root_dir }}/data/logs/*.log {
daily
rotate 90
compress
missingok
}

@@ -0,0 +1,11 @@
#!/bin/bash
restorecon -R {{ pim_root_dir }}
chown root:root {{ pim_root_dir }}
chmod 700 {{ pim_root_dir }}
setfacl -R -k -b {{ pim_root_dir }}
setfacl -m u:{{ pim_user | default('apache') }}:rx,u:{{ httpd_user | default('apache') }}:x {{ pim_root_dir }}
find {{ pim_root_dir }}/app -type f -exec chmod 644 "{}" \;
find {{ pim_root_dir }}/app -type d -exec chmod 755 "{}" \;
chown -R {{ pim_user }}:{{ pim_user }} {{ pim_root_dir }}/app

@@ -0,0 +1,35 @@
[pim_{{ pim_id }}]
listen.owner = root
listen.group = apache
listen.mode = 0660
listen = /run/php-fpm/pim_{{ pim_id }}.sock
user = {{ pim_user }}
group = {{ pim_user }}
catch_workers_output = yes
pm = dynamic
pm.max_children = 15
pm.start_servers = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 6
pm.max_requests = 5000
request_terminate_timeout = 5m
php_flag[display_errors] = off
php_admin_flag[log_errors] = on
php_admin_value[error_log] = syslog
php_admin_value[memory_limit] = 1024M
php_admin_value[session.save_path] = {{ pim_root_dir }}/sessions
php_admin_value[upload_tmp_dir] = {{ pim_root_dir }}/tmp
php_admin_value[sys_temp_dir] = {{ pim_root_dir }}/tmp
php_admin_value[post_max_size] = 200M
php_admin_value[upload_max_filesize] = 200M
php_admin_value[disable_functions] = system, show_source, symlink, exec, dl, shell_exec, passthru, phpinfo, escapeshellarg, escapeshellcmd
php_admin_value[open_basedir] = {{ pim_root_dir }}:/usr/share/pear/:/usr/share/php/
php_admin_value[max_execution_time] = 1200
php_admin_value[max_input_time] = 1200
php_admin_flag[allow_url_include] = off
php_admin_flag[allow_url_fopen] = off
php_admin_flag[file_uploads] = on
php_admin_flag[session.cookie_httponly] = on

@@ -0,0 +1,3 @@
#!/bin/bash -e
rm -f {{ pim_root_dir }}/backup/*.sql.zst

@@ -0,0 +1,14 @@
#!/bin/sh
set -eo pipefail
/usr/bin/mysqldump \
{% if pim_db_server not in ['localhost','127.0.0.1'] %}
--user={{ pim_db_user | quote }} \
--password={{ pim_db_pass | quote }} \
--host={{ pim_db_server | quote }} \
--port={{ pim_db_port | quote }} \
{% endif %}
--quick --single-transaction \
--add-drop-table {{ pim_db_name | quote }} | zstd -c > {{ pim_root_dir }}/backup/{{ pim_db_name }}.sql.zst

@@ -0,0 +1,95 @@
---
ampache_id: "1"
ampache_manage_upgrade: True
ampache_version: '5.1.1'
ampache_config_version: 58
ampache_zip_url: https://github.com/ampache/ampache/releases/download/{{ ampache_version }}/ampache-{{ ampache_version }}_all.zip
ampache_zip_sha1: a5347181297ab188fe95b3875f75b7838d581974
ampache_root_dir: /opt/ampache_{{ ampache_id }}
ampache_php_user: php-ampache_{{ ampache_id }}
ampache_php_version: 74
# If you prefer using a custom PHP FPM pool, set its name.
# You might need to adjust ampache_php_user
# ampache_php_fpm_pool: php56
ampache_mysql_server: "{{ mysql_server | default('localhost') }}"
# ampache_mysql_port: 3306
ampache_mysql_db: ampache_{{ ampache_id }}
ampache_mysql_user: ampache_{{ ampache_id }}
# If not defined, a random pass will be generated and stored in the meta directory
# ampache_mysql_pass: ampache
# ampache_alias: ampache
# ampache_allowed_ip:
# - 192.168.7.0/24
# - 10.2.0.0/24
ampache_local_web_path: "http://ampache.{{ ansible_domain }}/"
ampache_auth_methods:
- mysql
ampache_ldap_url: "{{ ad_auth | default(False) | ternary('ldap://' + ad_realm | default(samba_realm) | lower,ldap_uri) }}"
ampache_ldap_starttls: True
ampache_ldap_search_dn: "{{ ad_auth | default(False) | ternary((ad_ldap_user_search_base is defined) | ternary(ad_ldap_user_search_base,'DC=' + ad_realm | default(samba_realm) | regex_replace('\\.',',DC=')), ldap_base) }}"
ampache_ldap_username: ""
ampache_ldap_password: ""
ampache_ldap_objectclass: "{{ ad_auth | default(False) | ternary('user','inetOrgPerson') }}"
ampache_ldap_filter: "{{ ad_auth | default(False) | ternary('(&(objectCategory=person)(objectClass=user)(primaryGroupId=513)(sAMAccountName=%v))','(uid=%v)') }}"
ampache_ldap_email_field: mail
ampache_ldap_name_field: cn
ampache_admin_users:
- admin
#ampache_logout_redirect: https://sso.domain.org
ampache_metadata_order: 'getID3,filename'
ampache_lastfm_api_key: 697bad201ee93391630d845c7b3f9610
ampache_lastfm_api_secret: 5f5fe59aa2f9c60220f04e94aa59c209
ampache_max_bit_rate: 192
ampache_min_bit_rate: 64
# allowed, required or false
ampache_transcode_m4a: required
ampache_transcode_flac: required
ampache_transcode_mpc: required
ampache_transcode_ogg: required
ampache_transcode_oga: required
ampache_transcode_wav: required
ampache_transcode_wma: required
ampache_transcode_aif: required
ampache_transcode_aiff: required
ampache_transcode_ape: required
ampache_transcode_shn: required
ampache_transcode_mp3: allowed
ampache_transcode_avi: required
ampache_transcode_mkv: required
ampache_transcode_mpg: required
ampache_transcode_mpeg: required
ampache_transcode_m4v: required
ampache_transcode_mp4: required
ampache_transcode_mov: required
ampache_transcode_wmv: required
ampache_transcode_ogv: required
ampache_transcode_divx: required
ampache_transcode_m2ts: required
ampache_transcode_webm: required
ampache_transcode_flv: allowed
ampache_transcode_player_api_mp3: required
ampache_encode_player_api_target: mp3
ampache_encode_player_webplayer: mp3
ampache_encode_target: mp3
ampache_encode_video_target: webm
# If defined, will be printed on the login page. HTML can be used, e.g.
# ampache_motd: '<a href="/sso.php">Use central authentication</a>'
...
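For instance, enabling LDAP authentication on top of the local MySQL accounts only takes a couple of overrides of the variables above (the bind DN and password below are illustrative):
```
ampache_auth_methods:
  - mysql
  - ldap
ampache_ldap_username: 'cn=reader,ou=apps,dc=example,dc=org'
ampache_ldap_password: 'S3cr3t.'
```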

@@ -0,0 +1,4 @@
---
- include: ../httpd_common/handlers/main.yml
- include: ../httpd_php/handlers/main.yml
...

@@ -0,0 +1,6 @@
---
allow_duplicates: true
dependencies:
- role: httpd_php
- role: repo_rpmfusion
...

@@ -0,0 +1,213 @@
---
- name: Install needed tools
yum:
name:
- unzip
- acl
- git
- ffmpeg
- mariadb
tags: ampache
- import_tasks: ../includes/create_system_user.yml
vars:
- user: "{{ ampache_php_user }}"
- comment: "PHP FPM for ampache {{ ampache_id }}"
tags: ampache
- import_tasks: ../includes/webapps_set_install_mode.yml
vars:
- root_dir: "{{ ampache_root_dir }}"
- version: "{{ ampache_version }}"
tags: ampache
- set_fact: ampache_install_mode={{ (install_mode == 'upgrade' and not ampache_manage_upgrade) | ternary('none',install_mode) }}
tags: ampache
- set_fact: ampache_current_version={{ current_version | default('') }}
tags: ampache
- import_tasks: ../includes/webapps_archive.yml
vars:
- root_dir: "{{ ampache_root_dir }}"
- version: "{{ ampache_current_version }}"
- db_name: "{{ ampache_mysql_db }}"
when: ampache_install_mode == 'upgrade'
tags: ampache
- name: Create directory structure
file: path={{ item }} state=directory
with_items:
- "{{ ampache_root_dir }}"
- "{{ ampache_root_dir }}/web"
- "{{ ampache_root_dir }}/tmp"
- "{{ ampache_root_dir }}/sessions"
- "{{ ampache_root_dir }}/meta"
- "{{ ampache_root_dir }}/logs"
- "{{ ampache_root_dir }}/data"
- "{{ ampache_root_dir }}/data/metadata"
- "{{ ampache_root_dir }}/data/music"
- "{{ ampache_root_dir }}/data/video"
- "{{ ampache_root_dir }}/backup"
failed_when: False # Don't fail when a fuse FS is mounted on /music for example
tags: ampache
- when: ampache_install_mode != 'none'
block:
- name: Create tmp dir
file: path={{ ampache_root_dir }}/tmp/ampache state=directory
- name: Download Ampache
get_url:
url: "{{ ampache_zip_url }}"
dest: "{{ ampache_root_dir }}/tmp/"
checksum: "sha1:{{ ampache_zip_sha1 }}"
- name: Extract ampache archive
unarchive:
src: "{{ ampache_root_dir }}/tmp/ampache-{{ ampache_version }}_all.zip"
dest: "{{ ampache_root_dir }}/tmp/ampache"
remote_src: yes
- name: Move files to the correct directory
synchronize:
src: "{{ ampache_root_dir }}/tmp/ampache/"
dest: "{{ ampache_root_dir }}/web/"
delete: True
compress: False
delegate_to: "{{ inventory_hostname }}"
tags: ampache
- name: Check if htaccess files need to be moved
stat: path={{ ampache_root_dir }}/web/public/{{ item }}/.htaccess.dist
with_items:
- channel
- play
- rest
register: htaccess
tags: ampache
- name: Rename htaccess files
command: mv -f {{ ampache_root_dir }}/web/public/{{ item.item }}/.htaccess.dist {{ ampache_root_dir }}/web/public/{{ item.item }}/.htaccess
with_items: "{{ htaccess.results }}"
when: item.stat.exists
tags: ampache
- import_tasks: ../includes/get_rand_pass.yml
vars:
- pass_file: "{{ ampache_root_dir }}/meta/key.txt"
tags: ampache
- set_fact: ampache_key={{ rand_pass }}
tags: ampache
- import_tasks: ../includes/get_rand_pass.yml
vars:
- pass_file: "{{ampache_root_dir }}/meta/ansible_dbpass"
when: ampache_mysql_pass is not defined
tags: ampache
- set_fact: ampache_mysql_pass={{ rand_pass }}
when: ampache_mysql_pass is not defined
tags: ampache
- import_tasks: ../includes/webapps_create_mysql_db.yml
vars:
- db_name: "{{ ampache_mysql_db }}"
- db_user: "{{ ampache_mysql_user }}"
- db_server: "{{ ampache_mysql_server }}"
- db_pass: "{{ ampache_mysql_pass }}"
tags: ampache
- name: Inject SQL structure
mysql_db:
name: "{{ ampache_mysql_db }}"
state: import
target: "{{ ampache_root_dir }}/web/sql/ampache.sql"
login_host: "{{ ampache_mysql_server }}"
login_user: sqladmin
login_password: "{{ mysql_admin_pass }}"
when: ampache_install_mode == 'install'
tags: ampache
- name: Deploy ampache configuration
template: src=ampache.cfg.php.j2 dest={{ ampache_root_dir }}/web/config/ampache.cfg.php group={{ ampache_php_user }} mode=640
tags: ampache
#- name: Upgrade SQL database
# command: php{{ ampache_php_version }} {{ ampache_root_dir }}/web/bin/cli admin:updateDatabase
# become_user: "{{ ampache_php_user }}"
# when: ampache_install_mode == 'upgrade'
# tags: ampache
- name: Grant admin privileges
command: mysql --host={{ ampache_mysql_server }} --user=sqladmin --password={{ mysql_admin_pass }} {{ ampache_mysql_db }} -e "UPDATE `user` SET `access`='100' WHERE `username`='{{ item }}'"
changed_when: False
become_user: "{{ ampache_php_user }}"
with_items: "{{ ampache_admin_users }}"
tags: ampache
- import_tasks: ../includes/webapps_webconf.yml
vars:
- app_id: ampache_{{ ampache_id }}
- php_version: "{{ ampache_php_version }}"
- php_fpm_pool: "{{ ampache_php_fpm_pool | default('') }}"
tags: ampache
- name: Deploy motd
template: src=motd.php.j2 dest={{ ampache_root_dir }}/web/config/motd.php
when: ampache_motd is defined
tags: ampache
- name: Remove motd
file: path={{ ampache_root_dir }}/web/config/motd.php state=absent
when: ampache_motd is not defined
tags: ampache
- name: Deploy cron scripts
template: src={{ item }}.j2 dest={{ ampache_root_dir }}/web/bin/{{ item }}
with_items:
- cron.sh
tags: ampache
- name: Enable cronjob
cron:
name: ampache_{{ ampache_id }}
special_time: daily
user: "{{ ampache_php_user }}"
job: "/bin/sh {{ ampache_root_dir }}/web/bin/cron.sh"
cron_file: ampache_{{ ampache_id }}
tags: ampache
- name: Deploy sso script
template: src=sso.php.j2 dest={{ ampache_root_dir }}/web/sso.php
tags: ampache
- name: Deploy backup scripts
template: src={{ item }}-backup.j2 dest=/etc/backup/{{ item }}.d/ampache_{{ ampache_id }} mode=750
loop:
- pre
- post
tags: ampache
- import_tasks: ../includes/webapps_compress_archive.yml
vars:
- root_dir: "{{ ampache_root_dir }}"
- version: "{{ ampache_current_version }}"
when: ampache_install_mode == 'upgrade'
tags: ampache
- import_tasks: ../includes/webapps_post.yml
vars:
- root_dir: "{{ ampache_root_dir }}"
- version: "{{ ampache_version }}"
tags: ampache
- name: Remove temp and obsolete files
file: path={{ item }} state=absent
with_items:
- "{{ ampache_root_dir }}/tmp/ampache-{{ ampache_version }}_all.zip"
- "{{ ampache_root_dir }}/tmp/ampache/"
- "{{ ampache_root_dir }}/db_dumps"
- /etc/backup/pre.d/ampache_{{ ampache_id }}_dump_db
- /etc/backup/post.d/ampache_{{ ampache_id }}_rm_dump
tags: ampache
...

@@ -0,0 +1,137 @@
config_version = {{ ampache_config_version }}
{% if ampache_local_web_path is defined %}
local_web_path = "{{ ampache_local_web_path }}"
{% endif %}
database_hostname = {{ ampache_mysql_server }}
{% if ampache_mysql_port is defined %}
database_port = "{{ ampache_mysql_port }}"
{% endif %}
database_name = "{{ ampache_mysql_db }}"
database_username = "{{ ampache_mysql_user }}"
database_password = "{{ ampache_mysql_pass }}"
secret_key = "{{ ampache_key }}"
session_length = 3600
stream_length = 7200
remember_length = 604800
session_name = ampache
session_cookielife = 0
auth_methods = "{{ ampache_auth_methods | join(',') }}"
{% if 'ldap' in ampache_auth_methods %}
ldap_url = "{{ ampache_ldap_url }}"
ldap_username = "{{ ampache_ldap_username }}"
ldap_password = "{{ ampache_ldap_password }}"
ldap_start_tls = "{{ ampache_ldap_starttls | ternary('true','false') }}"
ldap_search_dn = "{{ ampache_ldap_search_dn }}"
ldap_objectclass = "{{ ampache_ldap_objectclass }}"
ldap_filter = "{{ ampache_ldap_filter }}"
ldap_email_field = "{{ ampache_ldap_email_field }}"
ldap_name_field = "{{ ampache_ldap_name_field }}"
external_auto_update = "true"
{% endif %}
{% if ampache_logout_redirect is defined %}
logout_redirect = "{{ ampache_logout_redirect }}"
{% endif %}
access_control = "true"
require_session = "true"
require_localnet_session = "true"
metadata_order = "{{ ampache_metadata_order }}"
getid3_tag_order = "id3v2,id3v1,vorbiscomment,quicktime,matroska,ape,asf,avi,mpeg,riff"
deferred_ext_metadata = "false"
additional_genre_delimiters = "[/]{2}|[/\\\\|,;]"
catalog_file_pattern = "mp3|mpc|m4p|m4a|aac|ogg|oga|wav|aif|aiff|rm|wma|asf|flac|opus|spx|ra|ape|shn|wv"
catalog_video_pattern = "avi|mpg|mpeg|flv|m4v|mp4|webm|mkv|wmv|ogv|mov|divx|m2ts"
catalog_playlist_pattern = "m3u|m3u8|pls|asx|xspf"
catalog_prefix_pattern = "The|An|A|Das|Ein|Eine|Les|Le|La"
track_user_ip = "true"
allow_zip_download = "true"
allow_zip_types = "album"
use_auth = "true"
ratings = "false"
userflags = "true"
directplay = "true"
sociable = "false"
licensing = "false"
memory_cache = "true"
album_art_store_disk = "true"
local_metadata_dir = "{{ ampache_root_dir }}/data/metadata"
max_upload_size = 1048576
resize_images = "false"
art_order = "db,tags,folder,musicbrainz,lastfm,google"
lastfm_api_key = "{{ ampache_lastfm_api_key }}"
lastfm_api_secret = "{{ ampache_lastfm_api_secret }}"
channel = "false"
live_stream = "false"
refresh_limit = "60"
show_footer_statistics = "false"
debug = "true"
debug_level = 5
log_path = "{{ ampache_root_dir }}/logs/"
log_filename = "%name.%Y%m%d.log"
site_charset = "UTF-8"
{% if 'ldap' in ampache_auth_methods or 'http' in ampache_auth_methods %}
auto_create = "true"
auto_user = "user"
{% endif %}
allow_public_registration = "false"
generate_video_preview = "true"
max_bit_rate = {{ ampache_max_bit_rate }}
min_bit_rate = {{ ampache_min_bit_rate }}
transcode_m4a = {{ ampache_transcode_m4a }}
transcode_flac = {{ ampache_transcode_flac }}
transcode_mpc = {{ ampache_transcode_mpc }}
transcode_ogg = {{ ampache_transcode_ogg }}
transcode_oga = {{ ampache_transcode_oga }}
transcode_wav = {{ ampache_transcode_wav }}
transcode_wma = {{ ampache_transcode_wma }}
transcode_aif = {{ ampache_transcode_aif }}
transcode_aiff = {{ ampache_transcode_aiff }}
transcode_ape = {{ ampache_transcode_ape }}
transcode_shn = {{ ampache_transcode_shn }}
transcode_mp3 = {{ ampache_transcode_mp3 }}
transcode_avi = {{ ampache_transcode_avi }}
transcode_mkv = {{ ampache_transcode_mkv }}
transcode_mpg = {{ ampache_transcode_mpg }}
transcode_mpeg = {{ ampache_transcode_mpeg }}
transcode_m4v = {{ ampache_transcode_m4v }}
transcode_mp4 = {{ ampache_transcode_mp4 }}
transcode_mov = {{ ampache_transcode_mov }}
transcode_wmv = {{ ampache_transcode_wmv }}
transcode_ogv = {{ ampache_transcode_ogv }}
transcode_divx = {{ ampache_transcode_divx }}
transcode_m2ts = {{ ampache_transcode_m2ts }}
transcode_webm = {{ ampache_transcode_webm }}
transcode_flv = {{ ampache_transcode_flv }}
encode_target = {{ ampache_encode_target }}
encode_player_webplayer_target = {{ ampache_encode_player_webplayer }}
transcode_player_api_mp3 = {{ ampache_transcode_player_api_mp3 }}
encode_video_target = {{ ampache_encode_video_target }}
transcode_player_customize = "true"
transcode_cmd = "/bin/ffmpeg"
transcode_input = "-i %FILE%"
encode_args_mp3 = "-vn -b:a %BITRATE%K -c:a libmp3lame -f mp3 pipe:1"
encode_args_ogg = "-vn -b:a %BITRATE%K -c:a libvorbis -f ogg pipe:1"
encode_args_m4a = "-vn -b:a %BITRATE%K -c:a libfdk_aac -f adts pipe:1"
encode_args_wav = "-vn -b:a %BITRATE%K -c:a pcm_s16le -f wav pipe:1"
encode_args_opus = "-vn -b:a %BITRATE%K -c:a libopus -compression_level 10 -vsync 2 -f ogg pipe:1"
encode_args_flv = "-b:a %BITRATE%K -ar 44100 -ac 2 -v 0 -f flv -c:v libx264 -preset superfast -threads 0 pipe:1"
encode_args_webm = "-q %QUALITY% -f webm -c:v libvpx -maxrate %MAXBITRATE%k -preset superfast -threads 0 pipe:1"
encode_args_ts = "-q %QUALITY% -s %RESOLUTION% -f mpegts -c:v libx264 -c:a libmp3lame -maxrate %MAXBITRATE%k -preset superfast -threads 0 pipe:1"
encode_get_image = "-ss %TIME% -f image2 -vframes 1 pipe:1"
encode_srt = "-vf \"subtitles='%SRTFILE%'\""
encode_ss_frame = "-ss %TIME%"
encode_ss_duration = "-t %DURATION%"
force_ssl = "true"
common_abbr = "divx,xvid,dvdrip,hdtv,lol,axxo,repack,xor,pdtv,real,vtv,caph,2hd,proper,fqm,uncut,topaz,tvt,notv,fpn,fov,orenji,0tv,omicron,dsr,ws,sys,crimson,wat,hiqt,internal,brrip,boheme,vost,vostfr,fastsub,addiction,x264,LOL,720p,1080p,YIFY,evolve,fihtv,first,bokutox,bluray,tvboom,info"
mail_enable = "true"
mail_type = "sendmail"
mail_domain = "{{ ansible_domain }}"
{% if system_proxy is defined and system_proxy != '' %}
proxy_host = "{{ system_proxy | urlsplit('hostname') }}"
proxy_port = "{{ system_proxy | urlsplit('port') }}"
proxy_user = "{{ system_proxy | urlsplit('username') }}"
proxy_pass = "{{ system_proxy | urlsplit('password') }}"
{% endif %}
metadata_order_video = "filename,getID3"
registration_display_fields = "fullname,website"
registration_mandatory_fields = "fullnamep"
allow_upload_scripts = "false"

@@ -0,0 +1,31 @@
#!/bin/sh
# Rotate logs
find {{ ampache_root_dir }}/logs -type f -mtime +7 -exec rm -f "{}" \;
find {{ ampache_root_dir }}/logs -type f -mtime +1 -exec xz -T0 "{}" \;
# Do we have a previous filelist to compare against ?
PREV_HASH=$(cat {{ ampache_root_dir }}/tmp/data_hash.txt || echo 'none')
# Now, compute a hash of the filelist
NEW_HASH=$(find {{ ampache_root_dir }}/data/{music,video} | sha1sum | cut -d' ' -f1)
# Write new hash so we can compare next time
echo -n $NEW_HASH > {{ ampache_root_dir }}/tmp/data_hash.txt
# If file list has changed since last time, then update the catalog
if [ "$PREV_HASH" != "$NEW_HASH" ]; then
# Clean (remove files which don't exist anymore)
/bin/php{{ ampache_php_version }} {{ ampache_root_dir }}/web/bin/cli run:updateCatalog -c > /dev/null 2>&1
# Add (files added)
/bin/php{{ ampache_php_version }} {{ ampache_root_dir }}/web/bin/cli run:updateCatalog -a > /dev/null 2>&1
# Update graphics
/bin/php{{ ampache_php_version }} {{ ampache_root_dir }}/web/bin/cli run:updateCatalog -g > /dev/null 2>&1
fi
# Now check if files have changed recently. We can have the same file list, but metadata updates
NEW_FILES=$(find {{ ampache_root_dir }}/data/{music,video} -type f -mtime -1 | wc -l)
if [ "$NEW_FILES" -gt "0" ]; then
# Verify (update metadata)
/bin/php{{ ampache_php_version }} {{ ampache_root_dir }}/web/bin/cli run:updateCatalog -e > /dev/null 2>&1
fi

@@ -0,0 +1,27 @@
{% if ampache_alias is defined %}
Alias /{{ ampache_alias }} {{ ampache_root_dir }}/web/public
{% else %}
# No alias defined, create a vhost to access it
{% endif %}
RewriteEngine On
<Directory {{ ampache_root_dir }}/web/public>
AllowOverride All
Options FollowSymLinks
{% if ampache_allowed_ip is defined %}
Require ip {{ ampache_allowed_ip | join(' ') }}
{% else %}
Require all granted
{% endif %}
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/{{ ampache_php_fpm_pool | default('ampache_' + ampache_id | string) }}.sock|fcgi://localhost"
</FilesMatch>
<FilesMatch "(.maintenance.*|.ansible.*|.php_cs|.travis.*)">
Require all denied
</FilesMatch>
</Directory>
<Directory {{ ampache_root_dir }}/web/config>
Require all denied
</Directory>

@@ -0,0 +1,3 @@
<?php
echo '<a href="/sso.php">{{ ampache_motd }}</a>';

@@ -0,0 +1,15 @@
#!/bin/sh
restorecon -R {{ ampache_root_dir }}
chown root:root {{ ampache_root_dir }}
chmod 700 {{ ampache_root_dir }}
setfacl -k -b {{ ampache_root_dir }}
setfacl -m u:{{ ampache_php_user | default('apache') }}:rx,u:{{ httpd_user | default('apache') }}:rx {{ ampache_root_dir }}
chown -R root:root {{ ampache_root_dir }}/web
chown {{ ampache_php_user }} {{ ampache_root_dir }}/data
chown -R {{ ampache_php_user }} {{ ampache_root_dir }}/{tmp,sessions,logs,data/metadata}
chmod 700 {{ ampache_root_dir }}/{tmp,sessions,logs,data}
find {{ ampache_root_dir }}/web -type f -exec chmod 644 "{}" \;
find {{ ampache_root_dir }}/web -type d -exec chmod 755 "{}" \;
chown :{{ ampache_php_user }} {{ ampache_root_dir }}/web/config/ampache.cfg.php
chmod 640 {{ ampache_root_dir }}/web/config/ampache.cfg.php

@@ -0,0 +1,37 @@
; {{ ansible_managed }}
[ampache_{{ ampache_id }}]
listen.owner = root
listen.group = {{ httpd_user | default('apache') }}
listen.mode = 0660
listen = /run/php-fpm/ampache_{{ ampache_id }}.sock
user = {{ ampache_php_user }}
group = {{ ampache_php_user }}
catch_workers_output = yes
pm = dynamic
pm.max_children = 15
pm.start_servers = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 6
pm.max_requests = 5000
request_terminate_timeout = 60m
php_flag[display_errors] = off
php_admin_flag[log_errors] = on
php_admin_value[error_log] = syslog
php_admin_value[memory_limit] = 512M
php_admin_value[session.save_path] = {{ ampache_root_dir }}/sessions
php_admin_value[upload_tmp_dir] = {{ ampache_root_dir }}/tmp
php_admin_value[sys_temp_dir] = {{ ampache_root_dir }}/tmp
php_admin_value[post_max_size] = 5M
php_admin_value[upload_max_filesize] = 5M
php_admin_value[disable_functions] = system, show_source, symlink, dl, shell_exec, passthru, phpinfo, escapeshellarg, escapeshellcmd
php_admin_value[open_basedir] = {{ ampache_root_dir }}
php_admin_value[max_execution_time] = 1800
php_admin_value[max_input_time] = 60
php_admin_flag[allow_url_include] = off
php_admin_flag[allow_url_fopen] = on
php_admin_flag[file_uploads] = on
php_admin_flag[session.cookie_httponly] = on

@@ -0,0 +1,3 @@
#!/bin/sh
rm -f {{ ampache_root_dir }}/backup/*

@@ -0,0 +1,9 @@
#!/bin/sh
set -eo pipefail
/usr/bin/mysqldump --user={{ ampache_mysql_user | quote }} \
--password={{ ampache_mysql_pass | quote }} \
--host={{ ampache_mysql_server | quote }} \
--quick --single-transaction \
--add-drop-table {{ ampache_mysql_db | quote }} | zstd -c > {{ ampache_root_dir }}/backup/{{ ampache_mysql_db }}.sql.zst

@@ -0,0 +1,6 @@
<?php
# Just a dummy redirection so we can protect /sso.php with Lemonldap::NG
header('Location: /');
?>

@@ -0,0 +1,53 @@
---
# Version to deploy
appsmith_version: 1.5.25
# URL of the source archive
appsmith_archive_url: https://github.com/appsmithorg/appsmith/archive/v{{ appsmith_version }}.tar.gz
# sha1sum of the archive
appsmith_archive_sha1: dceebde21c7b0a989aa7fb96bac044df4f2ddf50
# Root directory where appsmith will be installed
appsmith_root_dir: /opt/appsmith
# Should ansible handle upgrades (True) or only initial install (False)
appsmith_manage_upgrade: True
# User account under which appsmith will run
appsmith_user: appsmith
# appsmith needs a redis server and a mongodb one
appsmith_redis_url: redis://localhost:6379
# A random one will be created and stored in the meta directory if not defined here
appsmith_mongo_user: appsmith
# appsmith_mongo_pass: S3cr3t.
# Note: if appsmith_mongo_pass is defined, it'll be used with appsmith_mongo_user to connect, even if not indicated in appsmith_mongo_url
# Otherwise, an anonymous connection is made. By default, if you do not set appsmith_mongo_pass, a random one will be created
# If you insist on using anonymous connections, you should set appsmith_mongo_pass to False
appsmith_mongo_url: mongodb://localhost/appsmith?retryWrites=true
# appsmith server component
appsmith_server_port: 8088
# List of IP/CIDR having access to appsmith_server_port
appsmith_server_src_ip: []
# Email settings
appsmith_email_from: noreply@{{ ansible_domain }}
appsmith_email_server: localhost
appsmith_email_port: 25
appsmith_email_tls: "{{ (appsmith_email_port == 587) | ternary(True,False) }}"
# appsmith_email_user: account
# appsmith_email_pass: S3Cr3T4m@1l
# Encryption settings. If not defined, random values will be created and used
# appsmith_encryption_pass: p@ssw0rd
# appsmith_encryption_salt: Salt
# Public URL used to access appsmith
appsmith_public_url: http://{{ inventory_hostname }}
# User signup can be disabled
appsmith_user_signup: True
# If signup is enabled, you can restrict which domains are allowed to signup (an empty list means no restriction)
appsmith_signup_whitelist: []
# If signup is disabled, you can set a list of whitelisted emails which will be allowed
appsmith_admin_emails: []

@@ -0,0 +1,4 @@
---
- name: restart appsmith-server
  service: name=appsmith-server state=restarted

@@ -0,0 +1,11 @@
---
dependencies:
  - role: mkdir
  - role: maven
  - role: repo_mongodb
  - role: redis_server
    when: appsmith_redis_url | urlsplit('hostname') in ['localhost','127.0.0.1']
  - role: mongodb_server
    when: appsmith_mongo_url | urlsplit('hostname') in ['localhost','127.0.0.1']
  - role: nginx

@@ -0,0 +1,10 @@
---
- name: Compress previous version
  command: tar cf {{ appsmith_root_dir }}/archives/{{ appsmith_current_version }}.tar.zst --use-compress-program=zstd ./
  environment:
    ZSTD_CLEVEL: 10
  args:
    chdir: "{{ appsmith_root_dir }}/archives/{{ appsmith_current_version }}"
    warn: False
  tags: appsmith

@@ -0,0 +1,33 @@
---
- name: Create the archive dir
file:
path: "{{ appsmith_root_dir }}/archives/{{ appsmith_current_version }}"
state: directory
tags: appsmith
- name: Archive previous version
synchronize:
src: "{{ appsmith_root_dir }}/{{ item }}"
dest: "{{ appsmith_root_dir }}/archives/{{ appsmith_current_version }}"
recursive: True
delete: True
loop:
- server
- client
- etc
- meta
delegate_to: "{{ inventory_hostname }}"
tags: appsmith
- name: Dump mongo database
shell: |
mongodump --quiet \
--out {{ appsmith_root_dir }}/archives/{{ appsmith_current_version }}/ \
--uri \
{% if appsmith_mongo_pass is defined and appsmith_mongo_pass != False %}
{{ appsmith_mongo_url | urlsplit('scheme') }}://{{ appsmith_mongo_user }}:{{ appsmith_mongo_pass | urlencode | regex_replace('/','%2F') }}@{{ appsmith_mongo_url | urlsplit('hostname') }}{% if appsmith_mongo_url | urlsplit('port') %}:{{ appsmith_mongo_url | urlsplit('port') }}{% endif %}{{ appsmith_mongo_url | urlsplit('path') }}?{{ appsmith_mongo_url | urlsplit('query') }}
{% else %}
{{ appsmith_mongo_url }}
{% endif %}
tags: appsmith

@@ -0,0 +1,9 @@
---
- name: Remove tmp and unused files
  file: path={{ item }} state=absent
  loop:
    - "{{ appsmith_root_dir }}/archives/{{ appsmith_current_version }}"
    - "{{ appsmith_root_dir }}/tmp/appsmith-{{ appsmith_version }}"
    - "{{ appsmith_root_dir }}/tmp/appsmith-{{ appsmith_version }}.tar.gz"
  tags: appsmith

@@ -0,0 +1,30 @@
---
- name: Deploy appsmith server conf
template: src={{ item }}.j2 dest={{ appsmith_root_dir }}/etc/{{ item }} group={{ appsmith_user }} mode=640
loop:
- env
notify: restart appsmith-server
tags: appsmith
- name: Deploy nginx conf
template: src=nginx.conf.j2 dest=/etc/nginx/ansible_conf.d/appsmith.conf
notify: reload nginx
tags: appsmith
- name: Create the mongodb user
mongodb_user:
database: "{{ appsmith_mongo_url | urlsplit('path') | regex_replace('^\\/', '') }}"
name: "{{ appsmith_mongo_user }}"
password: "{{ appsmith_mongo_pass }}"
login_database: admin
login_host: "{{ appsmith_mongo_url | urlsplit('hostname') }}"
login_port: "{{ appsmith_mongo_url | urlsplit('port') | ternary(appsmith_mongo_url | urlsplit('port'),omit) }}"
login_user: mongoadmin
login_password: "{{ mongo_admin_pass }}"
roles:
- readWrite
when:
- appsmith_mongo_pass is defined
- appsmith_mongo_pass != False
tags: appsmith

@@ -0,0 +1,28 @@
---
- name: Create directories
file: path={{ item.dir }} state=directory owner={{ item.owner | default(omit) }} group={{ item.group | default(omit) }} mode={{ item.mode | default(omit) }}
loop:
- dir: "{{ appsmith_root_dir }}"
mode: 755
- dir: "{{ appsmith_root_dir }}/archives"
mode: 700
- dir: "{{ appsmith_root_dir }}/backup"
mode: 700
- dir: "{{ appsmith_root_dir }}/tmp"
owner: "{{ appsmith_user }}"
mode: 700
- dir: "{{ appsmith_root_dir }}/src"
owner: "{{ appsmith_user }}"
- dir: "{{ appsmith_root_dir }}/server"
owner: "{{ appsmith_user }}"
- dir: "{{ appsmith_root_dir }}/server/plugins"
owner: "{{ appsmith_user }}"
- dir: "{{ appsmith_root_dir }}/client"
- dir: "{{ appsmith_root_dir }}/meta"
mode: 700
- dir: "{{ appsmith_root_dir }}/etc"
group: "{{ appsmith_user }}"
mode: 750
- dir: "{{ appsmith_root_dir }}/bin"
tags: appsmith

@@ -0,0 +1,61 @@
---
# Detect installed version (if any)
- block:
- import_tasks: ../includes/webapps_set_install_mode.yml
vars:
- root_dir: "{{ appsmith_root_dir }}"
- version: "{{ appsmith_version }}"
- set_fact: appsmith_install_mode={{ (install_mode == 'upgrade' and not appsmith_manage_upgrade) | ternary('none',install_mode) }}
- set_fact: appsmith_current_version={{ current_version | default('') }}
tags: appsmith
# Create a random encryption password
- block:
- import_tasks: ../includes/get_rand_pass.yml
vars:
- pass_file: "{{ appsmith_root_dir }}/meta/ansible_encryption_pass"
- set_fact: appsmith_encryption_pass={{ rand_pass }}
when: appsmith_encryption_pass is not defined
tags: appsmith
# Create a random encryption salt
- block:
- import_tasks: ../includes/get_rand_pass.yml
vars:
- pass_file: "{{ appsmith_root_dir }}/meta/ansible_encryption_salt"
- complex: False
- pass_size: 10
- set_fact: appsmith_encryption_salt={{ rand_pass }}
when: appsmith_encryption_salt is not defined
tags: appsmith
- set_fact: appsmith_mongo_pass={{ appsmith_mongo_url | urlsplit('password') | urldecode }}
when:
- appsmith_mongo_pass is not defined
- appsmith_mongo_url | urlsplit('password') is string
tags: mongo
# Create a random password for mongo
- block:
- import_tasks: ../includes/get_rand_pass.yml
vars:
- pass_file: "{{ appsmith_root_dir }}/meta/ansible_mongo_pass"
- set_fact: appsmith_mongo_pass={{ rand_pass }}
when: appsmith_mongo_pass is not defined
tags: appsmith
# Try to read mongo admin pass
- name: Check if mongo pass file exists
stat: path=/root/.mongo.pw
register: appsmith_mongo_pw
tags: appsmith
- when: appsmith_mongo_pw.stat.exists and mongo_admin_pass is not defined
block:
- slurp: src=/root/.mongo.pw
register: appsmith_mongo_admin_pass
- set_fact: mongo_admin_pass={{ appsmith_mongo_admin_pass.content | b64decode | trim }}
tags: appsmith
- fail: msg='mongo_admin_pass must be provided'
when: not appsmith_mongo_pw.stat.exists and mongo_admin_pass is not defined
tags: appsmith


@ -0,0 +1,141 @@
---
- name: Install dependencies
yum:
name:
- nodejs
- java-11-openjdk
- java-11-openjdk-devel
- mongodb-org-tools
- make
- gcc-c++
tags: appsmith
- name: Detect exact JRE version
command: rpm -q java-11-openjdk
args:
warn: False
changed_when: False
register: appsmith_jre11_version
tags: appsmith
- name: Select JRE 11 as default version
alternatives:
name: "{{ item.name }}"
link: "{{ item.link }}"
path: "{{ item.path }}"
loop:
- name: java
link: /usr/bin/java
path: /usr/lib/jvm/{{ appsmith_jre11_version.stdout | trim }}/bin/java
- name: javac
link: /usr/bin/javac
path: /usr/lib/jvm/{{ appsmith_jre11_version.stdout | trim }}/bin/javac
- name: jre_openjdk
link: /usr/lib/jvm/jre-openjdk
path: /usr/lib/jvm/{{ appsmith_jre11_version.stdout | trim }}
- name: java_sdk_openjdk
link: /usr/lib/jvm/java-openjdk
path: /usr/lib/jvm/{{ appsmith_jre11_version.stdout | trim }}
tags: appsmith
- name: Stop the service during upgrade
service: name=appsmith-server state=stopped
when: appsmith_install_mode == 'upgrade'
tags: appsmith
- when: appsmith_install_mode != 'none'
block:
- name: Download appsmith
get_url:
url: "{{ appsmith_archive_url }}"
dest: "{{ appsmith_root_dir }}/tmp"
checksum: sha1:{{ appsmith_archive_sha1 }}
- name: Extract appsmith archive
unarchive:
src: "{{ appsmith_root_dir }}/tmp/appsmith-{{ appsmith_version }}.tar.gz"
dest: "{{ appsmith_root_dir }}/tmp"
remote_src: True
- name: Move sources
synchronize:
src: "{{ appsmith_root_dir }}/tmp/appsmith-{{ appsmith_version }}/"
dest: "{{ appsmith_root_dir }}/src/"
compress: False
delete: True
delegate_to: "{{ inventory_hostname }}"
- name: Compile the server
command: /opt/maven/apache-maven/bin/mvn -DskipTests clean package
args:
chdir: "{{ appsmith_root_dir }}/src/app/server"
- name: Remove previous server version
shell: find {{ appsmith_root_dir }}/server -name \*.jar -exec rm -f "{}" \;
- name: Copy server jar
copy: src={{ appsmith_root_dir }}/src/app/server/appsmith-server/target/server-1.0-SNAPSHOT.jar dest={{ appsmith_root_dir }}/server/ remote_src=True
notify: restart appsmith-server
- name: List plugins
shell: find {{ appsmith_root_dir }}/src/app/server/appsmith-*/*/target -maxdepth 1 -name \*.jar \! -name original\*
register: appsmith_plugins_jar
- name: Install plugins jar
copy: src={{ item }} dest={{ appsmith_root_dir }}/server/plugins/ remote_src=True
loop: "{{ appsmith_plugins_jar.stdout_lines }}"
- name: Install yarn
npm:
name: yarn
path: "{{ appsmith_root_dir }}/src/app/client"
- name: Install NodeJS dependencies
command: ./node_modules/yarn/bin/yarn install --ignore-engines
args:
chdir: "{{ appsmith_root_dir }}/src/app/client"
# Not sure why but yarn installs webpack 4.46.0 while appsmith wants 4.44.2
- name: Install correct webpack version
command: ./node_modules/yarn/bin/yarn add webpack@4.44.2 --ignore-engines
args:
chdir: "{{ appsmith_root_dir }}/src/app/client"
- name: Build the client
command: ./node_modules/.bin/craco --max-old-space-size=3072 build --config craco.build.config.js
args:
chdir: "{{ appsmith_root_dir }}/src/app/client"
# Note: the client will be deployed in {{ appsmith_root_dir }}/client
# with an ExecStartPre hook of the server, which will take care of replacing
# placeholders with current settings. So no need to do it here
become_user: "{{ appsmith_user }}"
tags: appsmith
- name: Deploy systemd unit
template: src={{ item }}.j2 dest=/etc/systemd/system/{{ item }}
loop:
- appsmith-server.service
register: appsmith_units
notify: restart appsmith-server
tags: appsmith
- name: Reload systemd
systemd: daemon_reload=True
when: appsmith_units.results | selectattr('changed','equalto',True) | list | length > 0
tags: appsmith
- name: Install pre-start script
template: src=pre-start.sh.j2 dest={{ appsmith_root_dir }}/bin/pre-start mode=755
notify: restart appsmith-server
tags: appsmith
- name: Install pre/post backup hooks
template: src={{ item }}-backup.sh.j2 dest=/etc/backup/{{ item }}.d/appsmith mode=700
loop:
- pre
- post
tags: appsmith


@ -0,0 +1,12 @@
---
- name: Handle appsmith ports in the firewall
iptables_raw:
name: "{{ item.name }}"
state: "{{ (item.src_ip | length > 0) | ternary('present','absent') }}"
rules: "-A INPUT -m state --state NEW -p tcp --dport {{ item.port }} -s {{ item.src_ip | join(',') }} -j ACCEPT"
loop:
- name: appsmith_server_port
port: "{{ appsmith_server_port }}"
src_ip: "{{ appsmith_server_src_ip }}"
tags: firewall,appsmith
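
The rule is toggled by the source list: a non-empty `appsmith_server_src_ip` installs it, an empty list removes it. A minimal host_vars sketch (the addresses below are purely illustrative, only the variable names come from the role):

```
# host_vars/appsmith.example.org.yml — illustrative values
appsmith_server_src_ip:
  - 192.168.20.0/24   # hosts allowed to reach appsmith_server_port directly
  - 10.0.0.5          # an additional trusted host
```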


@ -0,0 +1,17 @@
---
- include: user.yml
- include: directories.yml
- include: facts.yml
- include: archive_pre.yml
when: appsmith_install_mode == 'upgrade'
- include: install.yml
- include: conf.yml
- include: iptables.yml
when: iptables_manage | default(True)
- include: services.yml
- include: write_version.yml
- include: archive_post.yml
when: appsmith_install_mode == 'upgrade'
- include: cleanup.yml


@ -0,0 +1,7 @@
---
- name: Start and enable the services
service: name={{ item }} state=started enabled=True
loop:
- appsmith-server
tags: appsmith


@ -0,0 +1,8 @@
---
- name: Create appsmith user
user:
name: "{{ appsmith_user }}"
home: "{{ appsmith_root_dir }}"
system: True
tags: appsmith


@ -0,0 +1,5 @@
---
- name: Write installed version
copy: content={{ appsmith_version }} dest={{ appsmith_root_dir }}/meta/ansible_version
tags: appsmith


@ -0,0 +1,35 @@
[Unit]
Description=Open-source framework to build apps and workflows
After=syslog.target network.target mongodb.service redis.service
[Service]
Type=simple
User={{ appsmith_user }}
Group={{ appsmith_user }}
EnvironmentFile={{ appsmith_root_dir }}/etc/env
WorkingDirectory={{ appsmith_root_dir }}/server
PermissionsStartOnly=yes
ExecStartPre={{ appsmith_root_dir }}/bin/pre-start
ExecStart=/bin/java -Djava.net.preferIPv4Stack=true \
-Dserver.port={{ appsmith_server_port }} \
-Djava.security.egd="file:/dev/./urandom" \
{% if system_proxy is defined and system_proxy != '' %}
-Dhttp.proxyHost={{ system_proxy | urlsplit('hostname') }} \
-Dhttp.proxyPort={{ system_proxy | urlsplit('port') }} \
-Dhttps.proxyHost={{ system_proxy | urlsplit('hostname') }} \
-Dhttps.proxyPort={{ system_proxy | urlsplit('port') }} \
{% endif %}
-jar server-1.0-SNAPSHOT.jar
PrivateTmp=yes
ProtectSystem=full
ProtectHome=yes
NoNewPrivileges=yes
MemoryLimit=4096M
Restart=on-failure
StartLimitInterval=0
RestartSec=30
SyslogIdentifier=appsmith-server
[Install]
WantedBy=multi-user.target


@ -0,0 +1,25 @@
APPSMITH_MAIL_ENABLED=true
APPSMITH_MAIL_FROM={{ appsmith_email_from }}
APPSMITH_MAIL_HOST={{ appsmith_email_server }}
APPSMITH_MAIL_PORT={{ appsmith_email_port }}
APPSMITH_MAIL_SMTP_TLS_ENABLED={{ appsmith_email_tls | ternary('true','false') }}
{% if appsmith_email_user is defined and appsmith_email_pass is defined %}
APPSMITH_MAIL_SMTP_AUTH=true
APPSMITH_MAIL_USERNAME={{ appsmith_email_user }}
APPSMITH_MAIL_PASSWORD={{ appsmith_email_pass }}
{% endif %}
APPSMITH_REDIS_URL={{ appsmith_redis_url }}
{% if appsmith_mongo_user is defined and appsmith_mongo_pass is defined and appsmith_mongo_pass != False %}
{% set appsmith_mongo_url_obj = appsmith_mongo_url | urlsplit %}
APPSMITH_MONGODB_URI={{ appsmith_mongo_url_obj['scheme'] }}://{{ appsmith_mongo_user }}:{{ appsmith_mongo_pass | urlencode | regex_replace('/','%2F') }}@{{ appsmith_mongo_url_obj['hostname'] }}{% if appsmith_mongo_url_obj['port'] %}:{{ appsmith_mongo_url_obj['port'] }}{% endif %}{{ appsmith_mongo_url_obj['path'] }}?{{ appsmith_mongo_url_obj['query'] }}
{% else %}
APPSMITH_MONGODB_URI={{ appsmith_mongo_url }}
{% endif %}
APPSMITH_DISABLE_TELEMETRY=true
APPSMITH_ENCRYPTION_PASSWORD={{ appsmith_encryption_pass }}
APPSMITH_ENCRYPTION_SALT={{ appsmith_encryption_salt }}
APPSMITH_SIGNUP_DISABLED={{ appsmith_user_signup | ternary('false','true') }}
{% if appsmith_signup_whitelist | length > 0 and appsmith_user_signup %}
APPSMITH_SIGNUP_ALLOWED_DOMAINS={{ appsmith_signup_whitelist | join(',') }}
{% endif %}
APPSMITH_ADMIN_EMAILS={{ appsmith_admin_emails | join(',') }}
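
For illustration, with made-up values `appsmith_mongo_user: appsmith`, `appsmith_mongo_pass: 'S3cr3t/1'` and `appsmith_mongo_url: mongodb://localhost:27017/appsmith?authSource=admin`, the template above would render `APPSMITH_MONGODB_URI=mongodb://appsmith:S3cr3t%2F1@localhost:27017/appsmith?authSource=admin`, the `regex_replace` taking care of the slash that `urlencode` leaves unencoded.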


@ -0,0 +1,34 @@
server {
listen 80;
server_name {{ appsmith_public_url | urlsplit('hostname') }};
include /etc/nginx/ansible_conf.d/acme.inc;
root {{ appsmith_root_dir }}/client;
client_max_body_size 10M;
if ($request_method !~ ^(GET|POST|HEAD|PUT|DELETE|PATCH)$ ) {
return 405;
}
# Send info about the original request to the backend
proxy_set_header X-Forwarded-For "$proxy_add_x_forwarded_for";
proxy_set_header X-Real-IP "$remote_addr";
proxy_set_header X-Forwarded-Proto "$scheme";
proxy_set_header X-Forwarded-Host "$host";
proxy_set_header Host "$host";
location / {
try_files $uri /index.html =404;
}
location /f {
proxy_pass https://cdn.optimizely.com/;
}
location /api {
proxy_pass http://127.0.0.1:{{ appsmith_server_port }};
}
location /oauth2 {
proxy_pass http://127.0.0.1:{{ appsmith_server_port }};
}
location /login {
proxy_pass http://127.0.0.1:{{ appsmith_server_port }};
}
}


@ -0,0 +1,3 @@
#!/bin/bash -e
rm -rf {{ appsmith_root_dir }}/backup/*


@ -0,0 +1,12 @@
#!/bin/sh
set -eo pipefail
mongodump \
{% if appsmith_mongo_pass is defined and appsmith_mongo_pass != False %}
{% set appsmith_mongo_url_obj = appsmith_mongo_url | urlsplit %}
--uri {{ appsmith_mongo_url_obj['scheme'] }}://{{ appsmith_mongo_user }}:{{ appsmith_mongo_pass | urlencode | regex_replace('/','%2F') }}@{{ appsmith_mongo_url_obj['hostname'] }}{% if appsmith_mongo_url_obj['port'] %}:{{ appsmith_mongo_url_obj['port'] }}{% endif %}{{ appsmith_mongo_url_obj['path'] }}?{{ appsmith_mongo_url_obj['query'] }} \
{% else %}
--uri {{ appsmith_mongo_url }} \
{% endif %}
--out {{ appsmith_root_dir }}/backup


@ -0,0 +1,19 @@
#!/bin/bash -e
# If the conf changed since the last client deployment, or if the client build is newer than the one deployed, then re-deploy
if [ {{ appsmith_root_dir }}/etc/env -nt {{ appsmith_root_dir }}/client/ -o {{ appsmith_root_dir }}/src/app/client/build/ -nt {{ appsmith_root_dir }}/client/ ]; then
rsync -a --delete {{ appsmith_root_dir }}/src/app/client/build/ {{ appsmith_root_dir }}/client/
find {{ appsmith_root_dir }}/client/ -type f | xargs \
sed -i \
{% for var in [
"APPSMITH_SENTRY_DSN","APPSMITH_SMART_LOOK_ID","APPSMITH_OAUTH2_GOOGLE_CLIENT_ID",
"APPSMITH_OAUTH2_GITHUB_CLIENT_ID","APPSMITH_MARKETPLACE_ENABLED",
"APPSMITH_SEGMENT_KEY","APPSMITH_OPTIMIZELY_KEY","APPSMITH_ALGOLIA_API_ID",
"APPSMITH_ALGOLIA_SEARCH_INDEX_NAME","APPSMITH_ALGOLIA_API_KEY","APPSMITH_CLIENT_LOG_LEVEL",
"APPSMITH_GOOGLE_MAPS_API_KEY","APPSMITH_TNC_PP","APPSMITH_VERSION_ID",
"APPSMITH_VERSION_RELEASE_DATE","APPSMITH_INTERCOM_APP_ID","APPSMITH_MAIL_ENABLED","APPSMITH_DISABLE_TELEMETRY"] %}
-e "s/__{{ var }}__/${{ '{' ~ var ~ '}' }}/g"{% if not loop.last %} \{% endif %}
{% endfor %}
fi
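
With the role's tasks, templates and handlers in place, applying it only takes a short playbook. A sketch (the host group name and the public URL are assumptions to adapt to your inventory):

```
# deploy_appsmith.yml — sketch, assuming a host group named "appsmith"
- hosts: appsmith
  become: True
  roles:
    - appsmith
  vars:
    appsmith_public_url: https://appsmith.example.org   # illustrative value
```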


@ -0,0 +1,36 @@
---
# The shell of the lbkp account
backup_shell: '/bin/bash'
# List of commands lbkp will be allowed to run as root, with sudo
backup_sudo_base_commands:
- /usr/bin/rsync
- /usr/local/bin/pre-backup
- /usr/local/bin/post-backup
- /bin/tar
- /bin/gtar
backup_sudo_extra_commands: []
backup_sudo_commands: "{{ backup_sudo_base_commands + backup_sudo_extra_commands }}"
# List of ssh public keys to deploy
backup_ssh_keys: []
# Options to set for the ssh keys, to restrict what they can do
backup_ssh_keys_options:
- no-X11-forwarding
- no-agent-forwarding
- no-pty
# List of IP addresses allowed to use the ssh keys
# Empty list means no restriction
backup_src_ip: []
# Custom pre / post script
backup_pre_script: |
#!/bin/bash -e
# Nothing to do
backup_post_script: |
#!/bin/bash -e
# Nothing to do
...
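
In practice these defaults are overridden from group_vars or host_vars; a sketch with placeholder values (the key and addresses are made up, the variable names are the role's own):

```
# group_vars/all.yml — placeholder values
backup_ssh_keys:
  - ssh-ed25519 AAAAC3Nza... backup@backupserver
backup_src_ip:
  - 10.2.0.4            # only the backup server may use the key
backup_sudo_extra_commands:
  - /usr/bin/mysqldump  # extend the sudo whitelist if a hook needs it
```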


@ -0,0 +1,57 @@
#!/usr/bin/perl -w
# This script will back up the config of MegaRAID based
# RAID controllers. The saved config can be restored with
# MegaCli -CfgRestore -f /home/lbkp/megaraid/cfg_0.bin for example
# It also creates a backup of the config as text, so you can
# manually check how things were configured at a certain point in time
# If MegaCli is not installed, then the script does nothing
use strict;
my $megacli = undef;
if (-x '/opt/MegaRAID/MegaCli/MegaCli64'){
$megacli = '/opt/MegaRAID/MegaCli/MegaCli64';
} elsif (-x '/opt/MegaRAID/MegaCli/MegaCli'){
$megacli = '/opt/MegaRAID/MegaCli/MegaCli';
}
if (!$megacli){
print "MegaCli not installed, nothing to do\n";
exit 0;
}
my $adapters = 0;
foreach (qx($megacli -adpCount -NoLog)) {
if ( m/Controller Count:\s*(\d+)/ ) {
$adapters = $1;
last;
}
}
foreach my $adp (0..$adapters-1){
my $hba = 0;
my $failgrouplist = 0;
foreach my $line (qx($megacli -CfgDsply -a$adp -NoLog)) {
if ( $line =~ m/Failed to get Disk Group list/ ) {
$failgrouplist = 1;
} elsif ( $line =~ m/Product Name:.*(JBOD|HBA)/ ) {
$hba = 1;
}
}
# Skip adapter if in HBA mode
next if ($hba && $failgrouplist);
# Save the config in binary format
print "Saving config for adapter $adp\n";
qx($megacli -CfgSave -f /home/lbkp/megaraid/cfg_$adp.bin -a$adp -NoLog);
die "Failed to backup conf for adapter $adp\n" unless ($? == 0);
# Now also save in text representation
open TXT, ">/home/lbkp/megaraid/cfg_$adp.txt" or die "Failed to open /home/lbkp/megaraid/cfg_$adp.txt: $!\n";
print TXT foreach qx($megacli -CfgDsply -a$adp -NoLog);
die "Failed to backup Cfg text description for adapter $adp\n" unless ($? == 0);
close TXT;
}


@ -0,0 +1,3 @@
#!/bin/sh
/bin/rpm -qa --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\n" | grep -v gpg-pubkey | sort > /home/lbkp/rpms.list


@ -0,0 +1,15 @@
#!/bin/bash
if [ -d "/etc/backup/post.d" ]; then
for H in $(find /etc/backup/post.d -type f -o -type l | sort); do
if [ -x $H ]; then
echo "Running hook $H"
$H "$@"
echo "Finished hook $H"
else
echo "Skipping hook $H as it's not executable"
fi
done
fi
# Remove the lock
rm -f /var/lock/bkp.lock


@ -0,0 +1,35 @@
#!/bin/bash
set -e
# 2 locks are needed. The first one ensures we don't run
# the pre-backup script twice. It's an atomic lock.
# Then we need a second lock which will last until the post-backup has run
# This one doesn't need to be atomic (as we already checked this)
PRELOCKFILE="/var/lock/pre-bkp.lock"
exec 200>$PRELOCKFILE
flock -n 200 || ( echo "Couldn't acquire pre-backup lock" && exit 1 )
PID=$$
echo $PID 1>&200
if [ -e /var/lock/bkp.lock ]; then
# Consider the lock to be stale if it's older than 8 hours
if [ "$(( $(date +"%s") - $(stat -c "%Y" /var/lock/bkp.lock) ))" -gt "28800" ]; then
rm /var/lock/bkp.lock
else
echo "Another backup is running"
exit 1
fi
fi
touch /var/lock/bkp.lock
if [ -d "/etc/backup/pre.d" ]; then
for H in $(find /etc/backup/pre.d -type f -o -type l | sort); do
if [ -x $H ]; then
echo "Running hook $H"
$H "$@"
echo "Finished hook $H"
else
echo "Skipping hook $H as it's not executable"
fi
done
fi


@ -0,0 +1,3 @@
#!/bin/bash -e
rm -f /home/lbkp/megaraid/*


@ -0,0 +1,94 @@
---
- name: Install backup tools
yum: name=rsync
when: ansible_os_family == 'RedHat'
- name: Install backup tools
apt: name=rsync
when: ansible_os_family == 'Debian'
- name: Create a local backup user account
user: name=lbkp comment="Local backup account" system=yes shell={{ backup_shell }}
tags: backup
- name: Deploy sudo configuration
template: src=sudo.j2 dest=/etc/sudoers.d/backup mode=400
tags: backup
- name: Deploy SSH keys for the backup account
authorized_key:
user: lbkp
key: "{{ backup_ssh_keys | join(\"\n\") }}"
key_options: "{{ backup_ssh_keys_options | join(',') }}"
exclusive: yes
when: backup_src_ip is not defined or backup_src_ip | length < 1
tags: backup
- name: Deploy SSH keys for the backup account (with source IP restriction)
authorized_key:
user: lbkp
key: "{{ backup_ssh_keys | join(\"\n\") }}"
key_options: "from=\"{{ backup_src_ip | join(',') }}\",{{ backup_ssh_keys_options | join(',') }}"
exclusive: yes
when:
- backup_src_ip is defined
- backup_src_ip | length > 0
tags: backup
- name: Create pre and post backup hook dir
file: path={{ item }} state=directory mode=750
with_items:
- /etc/backup/pre.d
- /etc/backup/post.d
tags: backup
- name: Deploy default pre/post backup hooks
copy:
content: "{{ item.content }}"
dest: /etc/backup/{{ item.type }}.d/default
mode: 0755
loop:
- type: pre
content: "{{ backup_pre_script }}"
- type: post
content: "{{ backup_post_script }}"
tags: backup
- name: Copy pre-backup script
copy: src={{ item }} dest=/usr/local/bin/{{ item }} mode=750 group=lbkp
with_items:
- pre-backup
- post-backup
tags: backup
- name: Deploy rpm dump list script
copy: src=dump-rpms-list dest=/etc/backup/pre.d/dump-rpms-list mode=755
when: ansible_os_family == 'RedHat'
tags: backup
- name: Create megaraid dump dir
file: path=/home/lbkp/megaraid state=directory
tags: backup
- name: Deploy MegaCli backup scripts
copy: src={{ item.script }} dest=/etc/backup/{{ item.type }}.d/{{ item.script }} mode=750
with_items:
- script: dump-megaraid-cfg
type: pre
- script: rm-megaraid-cfg
type: post
when: lsi_controllers | default([]) | length > 0
tags: backup
- name: Excludes for proxmox backup client
copy:
dest: /.pxarexclude
content: |
var/log/lastlog
when:
- ansible_virtualization_role == 'guest'
- ansible_virtualization_type == 'lxc' or ansible_virtualization_type == 'systemd-nspawn'
tags: backup
...


@ -0,0 +1,2 @@
Defaults:lbkp !requiretty
lbkp ALL=(root) NOPASSWD: {{ backup_sudo_commands | join(',') }}


@ -0,0 +1,19 @@
---
# You can choose either 3 or 4
bpc_major_version: 3
# Auth to access BackupPC. Can be basic, lemonldap, lemonldap2 or none
bpc_auth: basic
# List of IP addresses allowed
bpc_src_ip: []
# Should backuppc be started on boot?
# You might want to turn this off if, for example, you must unlock
# the device on which you have your backups, and manually start backuppc after that
bpc_enabled: True
# Should /BackupPC aliases be added on the main vhost?
# You might want to, but you can also disable this and grant access only through a dedicated vhost
bpc_alias_on_main_vhost: True
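
For example, a host running BackupPC 4 behind Lemonldap::NG, reachable only from an admin network and started manually, could set something like this (illustrative values, variable names from the defaults above):

```
# host_vars/backup.example.org.yml — illustrative values
bpc_major_version: 4
bpc_auth: lemonldap2
bpc_src_ip:
  - 10.2.0.0/24
bpc_enabled: False   # start backuppc by hand after unlocking the backup volume
```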


@ -0,0 +1,5 @@
---
- include: ../httpd_common/handlers/main.yml
...


@ -0,0 +1,3 @@
---
dependencies:
- { role: httpd_front }


@ -0,0 +1,53 @@
---
- name: Install BackupPC 4
yum:
name:
- BackupPC4
- fuse-backuppcfs4
when: bpc_major_version == 4
tags: bpc
- name: Install BackupPC 3
yum:
name:
- BackupPC
- fuse-backuppcfs
when: bpc_major_version != 4
tags: bpc
- name: Install tools
yum:
name:
- rsync
- tar
- samba-client
- openssh-clients
- BackupPC-server-scripts
- fuse-chunkfs
tags: bpc
- name: Deploy httpd conf
template: src=httpd.conf.j2 dest=/etc/httpd/ansible_conf.d/40-BackupPC.conf
notify: reload httpd
tags: bpc
- name: Deploy sudo config
template: src=sudoers.j2 dest=/etc/sudoers.d/backuppc mode=0400
tags: bpc
- name: Create SSH Key
user:
name: backuppc
generate_ssh_key: yes
ssh_key_bits: 4096
tags: bpc
- name: Start the service
service: name=backuppc state=started
when: bpc_enabled
tags: bpc
- name: Handle backuppc service status
service: name=backuppc enabled={{ bpc_enabled }}
tags: bpc


@ -0,0 +1,25 @@
<Directory /usr/share/BackupPC/>
SSLRequireSSL
{% if bpc_auth == "lemonldap" %}
PerlHeaderParserHandler Lemonldap::NG::Handler
{% elif bpc_auth == "lemonldap2" %}
PerlHeaderParserHandler Lemonldap::NG::Handler::ApacheMP2
{% elif bpc_auth == "basic" %}
AuthType Basic
AuthUserFile /etc/BackupPC/apache.users
AuthName "BackupPC"
Require valid-user
{% endif %}
{% if bpc_src_ip | length < 1 %}
Require all denied
{% else %}
Require ip {{ bpc_src_ip | join(' ') }}
{% endif %}
</Directory>
{% if bpc_auth != False and bpc_auth != 'none' and bpc_alias_on_main_vhost == True %}
Alias /BackupPC/images /usr/share/BackupPC/html/
ScriptAlias /BackupPC /usr/share/BackupPC/sbin/BackupPC_Admin
ScriptAlias /backuppc /usr/share/BackupPC/sbin/BackupPC_Admin
{% endif %}


@ -0,0 +1,3 @@
Defaults:backuppc !requiretty
Cmnd_Alias BACKUPPC = /usr/bin/rsync, /bin/tar, /bin/gtar, /usr/local/bin/pre-backup, /usr/local/bin/post-backup, /usr/bin/virt-backup
backuppc ALL=(root) NOPASSWD: BACKUPPC


@ -0,0 +1,78 @@
---
# Version to deploy
bookstack_version: '21.11.2'
# URL of the archive
bookstack_archive_url: https://github.com/BookStackApp/BookStack/archive/v{{ bookstack_version }}.tar.gz
# Expected sha1 of the archive
bookstack_archive_sha1: c9e8a0da936f7a2840c416dde70451f046e2b7f3
# Should ansible handle bookstack upgrades or just the initial install
bookstack_manage_upgrade: True
# We can deploy several bookstack instances on a single host
# each one can have a different ID which can be a simple number
# or a short string
bookstack_id: 1
# Where to install bookstack
bookstack_root_dir: /opt/bookstack_{{ bookstack_id }}
# User under which the app will be executed
bookstack_php_user: php-bookstack_{{ bookstack_id }}
# Version of PHP used
bookstack_php_version: 80
# Or you can specify here the name of a custom PHP FPM pool. See the httpd_php role
# bookstack_php_fpm_pool: custom_bookstack
# If defined, an alias will be added in httpd's config to access bookstack
# Else, you'll have to define a vhost to make bookstack accessible. See httpd_common role
bookstack_web_alias: /bookstack_{{ bookstack_id }}
# You can restrict access to bookstack. If not defined or empty,
# no restriction will be made
bookstack_src_ip: "{{ httpd_ssl_src_ip | default(httpd_src_ip) | default([]) }}"
# List of trusted proxies from which we can trust the X-Forwarded-For header
# Useful to get real client IP when BookStack is running behind a reverse proxy
# bookstack_trusted_proxies:
# - 10.99.2.10
# The default value is to use the same as bookstack_src_ip if it's not empty and doesn't contain 0.0.0.0/0
bookstack_trusted_proxies: "{{ (bookstack_src_ip | length > 0 and '0.0.0.0/0' not in bookstack_src_ip) | ternary(bookstack_src_ip, []) }}"
# MySQL Database
bookstack_db_server: "{{ mysql_server | default('localhost') }}"
bookstack_db_port: 3306
bookstack_db_user: bookstack_{{ bookstack_id }}
bookstack_db_name: bookstack_{{ bookstack_id }}
# If no pass is defined, a random one will be created and stored in meta/ansible_dbpass
# bookstack_db_pass: S3cr3t.
# Application key. If not defined, a random one will be generated and stored in meta/ansible_app_key
# bookstack_app_key: base64:H/zDPBqtK2BjOkgCrMMGGH+sSjOBrBs/ibcD4ozQc90=
# Public URL of the app
bookstack_public_url: http://{{ inventory_hostname }}/bookstack_{{ bookstack_id }}
# Email settings. Default will use local postfix installation
bookstack_email_name: BookStack
bookstack_email_from: no-reply@{{ ansible_domain }}
bookstack_email_server: localhost
bookstack_email_port: 25
# You can set user and pass if needed
# bookstack_email_user: user@example.org
# bookstack_email_pass: S3cR3t.
# Encryption can be tls, ssl or null
bookstack_email_encryption: 'null'
# Default lang
bookstack_default_lang: fr
# Session lifetime, in minutes
bookstack_session_lifetime: 480
# You can set custom directives with this:
# bookstack_settings:
# AUTH_METHOD: saml2
# SAML2_NAME: SSO
# SAML2_EMAIL_ATTRIBUTE: email
bookstack_settings: {}
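
A typical override block in host_vars might look like this (every value below is illustrative; only the variable names come from these defaults):

```
# host_vars/wiki.example.org.yml — illustrative values
bookstack_id: wiki
bookstack_public_url: https://wiki.example.org
bookstack_db_server: db1.example.org
bookstack_default_lang: en
# bookstack_db_pass can be set (ideally from ansible-vault), or omitted to let the role generate one
```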


@ -0,0 +1,8 @@
---
allow_duplicates: True
dependencies:
- role: mkdir
- role: mysql_server
when: bookstack_db_server in ['localhost','127.0.0.1']
- role: composer


@ -0,0 +1,10 @@
---
- name: Compress previous version
command: tar cf {{ bookstack_root_dir }}/archives/{{ bookstack_current_version }}.tar.zst ./ --use-compress-program=zstd
args:
chdir: "{{ bookstack_root_dir }}/archives/{{ bookstack_current_version }}"
warn: False
environment:
ZSTD_CLEVEL: 10
tags: bookstack


@ -0,0 +1,31 @@
---
- name: Create the archive dir
file: path={{ bookstack_root_dir }}/archives/{{ bookstack_current_version }} state=directory
tags: bookstack
- name: Archive current version
synchronize:
src: "{{ bookstack_root_dir }}/app"
dest: "{{ bookstack_root_dir }}/archives/{{ bookstack_current_version }}/"
compress: False
delete: True
rsync_opts:
- '--exclude=/storage/'
delegate_to: "{{ inventory_hostname }}"
tags: bookstack
- name: Dump the database
mysql_db:
state: dump
name: "{{ bookstack_db_name }}"
target: "{{ bookstack_root_dir }}/archives/{{ bookstack_current_version }}/{{ bookstack_db_name }}.sql.xz"
login_host: "{{ bookstack_db_server }}"
login_user: "{{ bookstack_db_user }}"
login_password: "{{ bookstack_db_pass }}"
quick: True
single_transaction: True
environment:
XZ_OPT: -T0
tags: bookstack


@ -0,0 +1,9 @@
---
- name: Remove tmp and obsolete files
file: path={{ item }} state=absent
loop:
- "{{ bookstack_root_dir }}/archives/{{ bookstack_current_version }}"
- "{{ bookstack_root_dir }}/tmp/BookStack-{{ bookstack_version }}"
- "{{ bookstack_root_dir }}/tmp/BookStack-{{ bookstack_version }}.tar.gz"
tags: bookstack


@ -0,0 +1,54 @@
---
- import_tasks: ../includes/webapps_webconf.yml
vars:
- app_id: bookstack_{{ bookstack_id }}
- php_version: "{{ bookstack_php_version }}"
- php_fpm_pool: "{{ bookstack_php_fpm_pool | default('') }}"
tags: bookstack
- when: bookstack_app_key is not defined
block:
- name: Generate a unique application key
shell: /bin/php{{ bookstack_php_version }} {{ bookstack_root_dir }}/app/artisan key:generate --show > {{ bookstack_root_dir }}/meta/ansible_app_key
args:
creates: "{{ bookstack_root_dir }}/meta/ansible_app_key"
- name: Read application key
slurp: src={{ bookstack_root_dir }}/meta/ansible_app_key
register: bookstack_rand_app_key
- set_fact: bookstack_app_key={{ bookstack_rand_app_key.content | b64decode | trim }}
tags: bookstack
- name: Deploy BookStack configuration
template: src=env.j2 dest={{ bookstack_root_dir }}/app/.env group={{ bookstack_php_user }} mode=640
tags: bookstack
- when: bookstack_install_mode != 'none'
block:
- name: Migrate the database
shell: echo yes | /bin/php{{ bookstack_php_version }} {{ bookstack_root_dir }}/app/artisan migrate
- name: Clear cache
command: /bin/php{{ bookstack_php_version }} {{ bookstack_root_dir }}/app/artisan cache:clear
- name: Clear views
command: /bin/php{{ bookstack_php_version }} {{ bookstack_root_dir }}/app/artisan view:clear
- name: Regenerate search
command: /bin/php{{ bookstack_php_version }} {{ bookstack_root_dir }}/app/artisan bookstack:regenerate-search
become_user: "{{ bookstack_php_user }}"
tags: bookstack
- name: Deploy permission script
template: src=perms.sh.j2 dest={{ bookstack_root_dir }}/perms.sh mode=755
register: bookstack_perm_script
tags: bookstack
- name: Apply permissions
command: "{{ bookstack_root_dir }}/perms.sh"
when: bookstack_perm_script.changed or bookstack_install_mode != 'none'
tags: bookstack


@ -0,0 +1,23 @@
---
- name: Create required directories
file: path={{ item.dir }} state=directory owner={{ item.owner | default(omit) }} group={{ item.group | default(omit) }} mode={{ item.mode | default(omit) }}
loop:
- dir: "{{ bookstack_root_dir }}"
- dir: "{{ bookstack_root_dir }}/meta"
mode: 700
- dir: "{{ bookstack_root_dir }}/backup"
mode: 700
- dir: "{{ bookstack_root_dir }}/archives"
mode: 700
- dir: "{{ bookstack_root_dir }}/app"
- dir: "{{ bookstack_root_dir }}/sessions"
group: "{{ bookstack_php_user }}"
mode: 770
- dir: "{{ bookstack_root_dir }}/tmp"
group: "{{ bookstack_php_user }}"
mode: 770
- dir: "{{ bookstack_root_dir }}/data"
group: "{{ bookstack_php_user }}"
mode: 700
tags: bookstack


@ -0,0 +1,20 @@
---
# Detect installed version (if any)
- block:
- import_tasks: ../includes/webapps_set_install_mode.yml
vars:
- root_dir: "{{ bookstack_root_dir }}"
- version: "{{ bookstack_version }}"
- set_fact: bookstack_install_mode={{ (install_mode == 'upgrade' and not bookstack_manage_upgrade) | ternary('none',install_mode) }}
- set_fact: bookstack_current_version={{ current_version | default('') }}
tags: bookstack
# Create a random pass for the DB if needed
- block:
- import_tasks: ../includes/get_rand_pass.yml
vars:
- pass_file: "{{ bookstack_root_dir }}/meta/ansible_dbpass"
- set_fact: bookstack_db_pass={{ rand_pass }}
when: bookstack_db_pass is not defined
tags: bookstack


@ -0,0 +1,86 @@
---
- name: Install needed tools
package:
name:
- acl
- tar
- zstd
- mariadb
tags: bookstack
- when: bookstack_install_mode != 'none'
block:
- name: Download bookstack
get_url:
url: "{{ bookstack_archive_url }}"
dest: "{{ bookstack_root_dir }}/tmp"
checksum: sha1:{{ bookstack_archive_sha1 }}
- name: Extract the archive
unarchive:
src: "{{ bookstack_root_dir }}/tmp/BookStack-{{ bookstack_version }}.tar.gz"
dest: "{{ bookstack_root_dir }}/tmp"
remote_src: True
- name: Move BookStack to its final dir
synchronize:
src: "{{ bookstack_root_dir }}/tmp/BookStack-{{ bookstack_version }}/"
dest: "{{ bookstack_root_dir }}/app/"
delete: True
compress: False
rsync_opts:
- '--exclude=/storage/'
- '--exclude=/public/uploads/'
delegate_to: "{{ inventory_hostname }}"
- name: Populate data directories
synchronize:
src: "{{ bookstack_root_dir }}/tmp/BookStack-{{ bookstack_version }}/{{ item }}"
dest: "{{ bookstack_root_dir }}/data/"
compress: False
delegate_to: "{{ inventory_hostname }}"
loop:
- storage
- public/uploads
- name: Link data directories
file: src={{ item.src }} dest={{ item.dest }} state=link
loop:
- src: "{{ bookstack_root_dir }}/data/storage"
dest: "{{ bookstack_root_dir }}/app/storage"
- src: "{{ bookstack_root_dir }}/data/uploads"
dest: "{{ bookstack_root_dir }}/app/public/uploads"
- name: Install PHP libs with composer
composer:
command: install
working_dir: "{{ bookstack_root_dir }}/app"
executable: /bin/php{{ bookstack_php_version }}
environment:
php: /bin/php{{ bookstack_php_version }}
tags: bookstack
- import_tasks: ../includes/webapps_create_mysql_db.yml
vars:
- db_name: "{{ bookstack_db_name }}"
- db_user: "{{ bookstack_db_user }}"
- db_server: "{{ bookstack_db_server }}"
- db_pass: "{{ bookstack_db_pass }}"
tags: bookstack
- name: Set correct SELinux context
sefcontext:
target: "{{ bookstack_root_dir }}(/.*)?"
setype: httpd_sys_content_t
state: present
when: ansible_selinux.status == 'enabled'
tags: bookstack
- name: Install pre/post backup hooks
template: src={{ item }}-backup.j2 dest=/etc/backup/{{ item }}.d/bookstack_{{ bookstack_id }} mode=700
loop:
- pre
- post
tags: bookstack


@ -0,0 +1,13 @@
---
- include: user.yml
- include: directories.yml
- include: facts.yml
- include: archive_pre.yml
when: bookstack_install_mode == 'upgrade'
- include: install.yml
- include: conf.yml
- include: write_version.yml
- include: archive_post.yml
when: bookstack_install_mode == 'upgrade'
- include: cleanup.yml


@ -0,0 +1,5 @@
---
- name: Create user account
user: name={{ bookstack_php_user }} system=True shell=/sbin/nologin home={{ bookstack_root_dir }}
tags: bookstack


@ -0,0 +1,5 @@
---
- name: Write current version
copy: content={{ bookstack_version }} dest={{ bookstack_root_dir }}/meta/ansible_version
tags: bookstack


@ -0,0 +1,28 @@
APP_KEY={{ bookstack_app_key }}
APP_URL={{ bookstack_public_url }}
DB_HOST={{ bookstack_db_server }}
DB_DATABASE={{ bookstack_db_name }}
DB_USERNAME={{ bookstack_db_user }}
DB_PASSWORD={{ bookstack_db_pass | quote }}
MAIL_DRIVER=smtp
MAIL_FROM_NAME="{{ bookstack_email_name }}"
MAIL_FROM={{ bookstack_email_from }}
MAIL_HOST={{ bookstack_email_server }}
MAIL_PORT={{ bookstack_email_port }}
{% if bookstack_email_user is defined and bookstack_email_pass is defined %}
MAIL_USERNAME={{ bookstack_email_user }}
MAIL_PASSWORD={{ bookstack_email_pass | quote }}
{% endif %}
MAIL_ENCRYPTION={{ bookstack_email_encryption }}
APP_TIMEZONE={{ system_tz | default('UTC') }}
APP_LANG={{ bookstack_default_lang }}
SESSION_SECURE_COOKIE={{ (bookstack_public_url | urlsplit('scheme') == 'https') | ternary('true','false') }}
SESSION_COOKIE_NAME=bookstack_{{ bookstack_id }}_session
SESSION_LIFETIME={{ bookstack_session_lifetime }}
CACHE_PREFIX=bookstack_{{ bookstack_id }}
{% if bookstack_trusted_proxies | length > 0 %}
APP_PROXIES={{ bookstack_trusted_proxies | join(',') }}
{% endif %}
{% for key in bookstack_settings.keys() | list %}
{{ key }}="{{ bookstack_settings[key] }}"
{% endfor %}


@ -0,0 +1,39 @@
{% if bookstack_web_alias is defined and bookstack_web_alias != False %}
Alias /{{ bookstack_web_alias | regex_replace('^/','') }} {{ bookstack_root_dir }}/app/public
{% else %}
# No alias defined, create a vhost to access it
{% endif %}
<Directory {{ bookstack_root_dir }}/app/public>
AllowOverride All
Options FollowSymLinks
{% if bookstack_src_ip is defined and bookstack_src_ip | length > 0 %}
Require ip {{ bookstack_src_ip | join(' ') }}
{% else %}
Require all granted
{% endif %}
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/{{ bookstack_php_fpm_pool | default('bookstack_' + bookstack_id | string) }}.sock|fcgi://localhost"
</FilesMatch>
RewriteEngine On
# Handle Authorization Header
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
# Redirect Trailing Slashes If Not A Folder...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} (.+)/$
RewriteRule ^ %1 [L,R=301]
# Send Requests To Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
<FilesMatch "(\.git.*)">
Require all denied
</FilesMatch>
</Directory>


@ -0,0 +1,19 @@
#!/bin/bash
restorecon -R {{ bookstack_root_dir }}
chown root:root {{ bookstack_root_dir }}
chmod 700 {{ bookstack_root_dir }}
setfacl -R -k -b {{ bookstack_root_dir }}
setfacl -m u:{{ bookstack_php_user | default('apache') }}:rx,u:{{ httpd_user | default('apache') }}:x {{ bookstack_root_dir }}
find {{ bookstack_root_dir }}/app -type f -exec chmod 644 "{}" \;
find {{ bookstack_root_dir }}/app -type d -exec chmod 755 "{}" \;
chown root:{{ bookstack_php_user }} {{ bookstack_root_dir }}/app/.env
chmod 640 {{ bookstack_root_dir }}/app/.env
chown -R {{ bookstack_php_user }} {{ bookstack_root_dir }}/app/bootstrap/cache
chmod 700 {{ bookstack_root_dir }}/app/bootstrap/cache
chown -R {{ bookstack_php_user }} {{ bookstack_root_dir }}/data
chmod 700 {{ bookstack_root_dir }}/data
setfacl -R -m u:{{ httpd_user | default('apache') }}:rx {{ bookstack_root_dir }}/app/public
setfacl -m u:{{ httpd_user | default('apache') }}:x {{ bookstack_root_dir }}/data/
setfacl -R -m u:{{ httpd_user | default('apache') }}:rx {{ bookstack_root_dir }}/data/uploads
find {{ bookstack_root_dir }} -name .htaccess -exec chmod 644 "{}" \;


@ -0,0 +1,35 @@
[bookstack_{{ bookstack_id }}]
listen.owner = root
listen.group = apache
listen.mode = 0660
listen = /run/php-fpm/bookstack_{{ bookstack_id }}.sock
user = {{ bookstack_php_user }}
group = {{ bookstack_php_user }}
catch_workers_output = yes
pm = dynamic
pm.max_children = 15
pm.start_servers = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 6
pm.max_requests = 5000
request_terminate_timeout = 5m
php_flag[display_errors] = off
php_admin_flag[log_errors] = on
php_admin_value[error_log] = syslog
php_admin_value[memory_limit] = 256M
php_admin_value[session.save_path] = {{ bookstack_root_dir }}/sessions
php_admin_value[upload_tmp_dir] = {{ bookstack_root_dir }}/tmp
php_admin_value[sys_temp_dir] = {{ bookstack_root_dir }}/tmp
php_admin_value[post_max_size] = 100M
php_admin_value[upload_max_filesize] = 100M
php_admin_value[disable_functions] = system, show_source, symlink, exec, dl, shell_exec, passthru, phpinfo, escapeshellarg, escapeshellcmd
php_admin_value[open_basedir] = {{ bookstack_root_dir }}:/usr/share/pear/:/usr/share/php/
php_admin_value[max_execution_time] = 60
php_admin_value[max_input_time] = 60
php_admin_flag[allow_url_include] = off
php_admin_flag[allow_url_fopen] = off
php_admin_flag[file_uploads] = on
php_admin_flag[session.cookie_httponly] = on


@ -0,0 +1,3 @@
#!/bin/bash -e
rm -f {{ bookstack_root_dir }}/backup/*.sql.zst


@ -0,0 +1,13 @@
#!/bin/sh
set -eo pipefail
/usr/bin/mysqldump \
{% if bookstack_db_server not in ['localhost','127.0.0.1'] %}
--user={{ bookstack_db_user | quote }} \
--password={{ bookstack_db_pass | quote }} \
--host={{ bookstack_db_server | quote }} \
--port={{ bookstack_db_port | quote }} \
{% endif %}
--quick --single-transaction \
--add-drop-table {{ bookstack_db_name | quote }} | zstd -c > {{ bookstack_root_dir }}/backup/{{ bookstack_db_name }}.sql.zst


@ -0,0 +1,16 @@
---
clam_mirror: database.clamav.net
clam_user: clamav
clam_group: clamav
clam_enable_clamd: False
clam_custom_db_url: []
clam_safebrowsing: True
clam_listen_port: 3310
clam_ports: "{{ [clam_listen_port] + [clam_stream_port_min ~ ':' ~ clam_stream_port_max] }}"
clam_listen_ip: 127.0.0.1
clam_src_ip: []
# Max stream size, in MB
clam_stream_max_size: 50
clam_stream_port_min: 30000
clam_stream_port_max: 32000
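
To expose clamd to other hosts instead of keeping it local-only, something like this could go in host_vars (addresses and limits are illustrative, variable names are the role's own):

```
# host_vars/av.example.org.yml — illustrative values
clam_enable_clamd: True
clam_listen_ip: 0.0.0.0
clam_src_ip:
  - 10.2.0.0/24        # mail servers allowed to stream files for scanning
clam_stream_max_size: 100
```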


@ -0,0 +1,9 @@
---
- include: ../common/handlers/main.yml
- name: restart freshclam
service: name=freshclam state=restarted
- name: restart clamd
service: name=clamd state={{ clam_enable_clamd | ternary('restarted','stopped') }}


@ -0,0 +1,57 @@
---
- name: Install packages
yum:
name:
- clamav
- clamav-data-empty
- clamav-server-systemd
- clamav-update
- name: Create clamav user account
user:
name: clamav
system: True
shell: /sbin/nologin
comment: "ClamAV antivirus user account"
- name: Set SELinux
seboolean: name={{ item }} state=True persistent=True
with_items:
- clamd_use_jit
- antivirus_can_scan_system
when: ansible_selinux.status == 'enabled'
- name: Deploy freshclam configuration
template: src=freshclam.conf.j2 dest=/etc/freshclam.conf mode=644
notify: restart freshclam
- name: Deploy clamd configuration
template: src=clamd.conf.j2 dest=/etc/clamd.conf
notify: restart clamd
- name: Deploy systemd units
template: src={{ item }}.j2 dest=/etc/systemd/system/{{ item }}
with_items:
- freshclam.service
- clamd.service
notify:
- restart freshclam
- restart clamd
register: clamav_units
- name: Deploy tmpfiles.d fragment
copy:
content: 'd /run/clamav 755 {{ clam_user }} {{ clam_group }}'
dest: /etc/tmpfiles.d/clamav.conf
notify: systemd-tmpfiles
- name: Reload systemd
command: systemctl daemon-reload
when: clamav_units.changed
- name: Start and enable freshclam
service: name=freshclam state=started enabled=True
- name: Handle clamd service
service: name=clamd state={{ clam_enable_clamd | ternary('started','stopped') }} enabled={{ clam_enable_clamd }}

Some files were not shown because too many files have changed in this diff.