For production deployment, we recommend deploying all components in containers, including dependencies, using native cloud services or an orchestration system such as Kubernetes.
For more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
Use Docker
Deploy OpenCTI using Docker and the default docker-compose.yml provided in the docker repository.
Before running the docker-compose command, the docker-compose.yml file must be configured. By default, it relies on environment variables defined in the .env.sample file.
You can either rename .env.sample to .env and fill in the values, or edit docker-compose.yml directly with the values for your environment.
Configuration static parameters
The complete list of available static parameters is available in the configuration section.
Here is an example of how to quickly generate the .env file under Linux, in particular all the default UUIDv4 values:
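A minimal sketch of such a generation script, using /proc/sys/kernel/random/uuid for the UUIDv4 values; the variable names below are assumptions, so treat .env.sample as the authoritative list:

```shell
# Sketch: generate a .env with fresh UUIDv4 secrets on Linux.
# Variable names are assumptions -- .env.sample is the authoritative list.
cat > .env << EOF
OPENCTI_ADMIN_EMAIL=admin@opencti.io
OPENCTI_ADMIN_PASSWORD=ChangeMePlease
OPENCTI_ADMIN_TOKEN=$(cat /proc/sys/kernel/random/uuid)
MINIO_ROOT_USER=$(cat /proc/sys/kernel/random/uuid)
MINIO_ROOT_PASSWORD=$(cat /proc/sys/kernel/random/uuid)
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=guest
EOF
```

Each command substitution reads a fresh kernel-generated UUIDv4, so every secret in the file is unique.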
If your docker-compose deployment does not support .env files, export all environment variables before launching the platform:
export $(cat .env | grep -v "#" | xargs)
As OpenCTI has a dependency on ElasticSearch, you have to set vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
sudo sysctl -w vm.max_map_count=1048575
To make this parameter persistent, add the following to the end of your /etc/sysctl.conf:
vm.max_map_count=1048575
Persist data
OpenCTI data is persistent by default.
At the end of docker-compose.yml, you will find the list of persistent volumes required by the dependencies:
volumes:
  esdata:    # ElasticSearch data
  s3data:    # S3 bucket data
  redisdata: # Redis data
  amqpdata:  # RabbitMQ data
Run OpenCTI
Using single node Docker
After changing your .env file, run docker-compose in detached (-d) mode:
sudo systemctl start docker.service
# Run docker-compose in detached mode
docker-compose up -d
Using Docker swarm
For the best experience with Docker, we recommend using the Docker stack feature. In this mode, you can easily scale your deployment.
# If your virtual machine is not a part of a Swarm cluster, please use:
docker swarm init
Put your environment variables in /etc/environment:
# If you already exported your variables to .env from above:
sudo bash -c 'cat .env >> /etc/environment'
sudo docker stack deploy --compose-file docker-compose.yml opencti
Installation done
You can now go to http://localhost:8080 and log in with the credentials configured in your environment variables.
Manual installation
Prerequisites
Installation of dependencies
You have to install all the needed dependencies for the main application and the workers. The example below is for Debian-based systems:
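As a sketch for Debian-based systems; the package names below are assumptions and may vary with your release, so adjust for other distributions:

```shell
# Illustrative dependency installation for Debian/Ubuntu.
# Package names are assumptions -- adjust for your distribution and release.
sudo apt-get update
sudo apt-get install -y build-essential nodejs npm python3 python3-pip python3-dev
```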
If your OS uses glibc (Ubuntu, Debian, ...), install the opencti-release_{RELEASE_VERSION}.tar.gz version.
If your OS uses musl (Alpine, ...), install the opencti-release-{RELEASE_VERSION}_musl.tar.gz version.
For Windows:
We don't provide any Windows release for now. However, it is still possible to check out the code, manually install the dependencies, and build the software.
Change the config/production.json file according to your configuration of ElasticSearch, Redis, RabbitMQ and S3 bucket as well as default credentials (the ADMIN_TOKEN must be a valid UUID).
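For illustration, a production.json might be shaped like the fragment below; the key names and layout here are assumptions based on a typical setup, so refer to config/default.json for the authoritative structure:

```json
{
  "app": {
    "port": 4000,
    "admin": {
      "email": "admin@opencti.io",
      "password": "ChangeMePlease",
      "token": "REPLACE-WITH-A-VALID-UUID-V4"
    }
  },
  "redis": { "hostname": "localhost", "port": 6379 },
  "elasticsearch": { "url": "http://localhost:9200" },
  "minio": { "endpoint": "localhost", "port": 9000 },
  "rabbitmq": { "hostname": "localhost", "port": 5672 }
}
```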
The application is just a Node.js process; the database schema creation and data migration are performed at startup.
Please verify that your yarn version is 4 or greater and your Node.js version is v19 or greater.
Please note that the Node.js versions shipped by some Linux package managers are outdated; you can download a recent one from https://nodejs.org/en/download, or use nvm (https://github.com/nvm-sh/nvm) to select a recent version.
yarn --version
# 4.1.0
node --version
# v20.11.1
Once Node.js is set up, you can build and run OpenCTI with (from inside the opencti folder):
yarn install
yarn build
yarn serv
The default username and password are those you set in the config/production.json file.
Install the worker
The OpenCTI worker is used to write the data coming from the RabbitMQ message broker.
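As a sketch of a manual worker installation; the paths and file names below are assumptions and may differ between releases:

```shell
# Hypothetical layout: worker sources and their requirements file.
cd opencti-worker/src
python3 -m pip install -r requirements.txt
# Copy the sample configuration, then set the OpenCTI URL and admin token in it.
cp config.yml.sample config.yml
python3 worker.py &
```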
The OpenCTI platform runs on a Node.js runtime with a default memory limit of 8 GB. If you encounter OutOfMemory exceptions, this limit can be raised:
- NODE_OPTIONS=--max-old-space-size=8096
Workers and connectors
OpenCTI workers and connectors are Python processes. If you want to limit a process's memory, we recommend using Docker's memory limits directly. You can find more information in the official Docker documentation.
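For example, a Docker Compose fragment capping a worker container's memory could look like this; the service and image names are assumptions:

```yaml
# Illustrative only: service and image names are assumptions.
services:
  worker:
    image: opencti/worker:latest
    deploy:
      resources:
        limits:
          memory: 512M
```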
ElasticSearch
ElasticSearch is a Java process. To configure its memory allocation, use the ES_JAVA_OPTS environment variable. You can find more information in the official ElasticSearch documentation.
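For instance, to pin the ElasticSearch JVM heap to 4 GB in a Compose file (the service name is an assumption):

```yaml
services:
  elasticsearch:
    environment:
      # Fixed 4 GB heap; size it to roughly half the container's memory.
      - ES_JAVA_OPTS=-Xms4g -Xmx4g
```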
Redis
Redis has a very small footprint for keys but will consume memory for the stream. By default, the stream size is limited to 2 million entries, which represents a memory footprint of around 8 GB. You can find more information on the Redis Docker Hub page.
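If memory is constrained, the stream length can typically be lowered; as an assumption, OpenCTI exposes this as a redis trimming parameter (shown here as the environment variable REDIS__TRIMMING), so verify the exact name in the configuration section:

```yaml
services:
  opencti:
    environment:
      # Assumed parameter name -- verify in the configuration section.
      - REDIS__TRIMMING=500000
```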
MinIO / S3 Bucket
MinIO is a small process and does not require a large amount of memory. More information for Linux is available in the MinIO kernel tuning guide.
RabbitMQ
The RabbitMQ memory configuration can be found in the official RabbitMQ documentation. RabbitMQ consumes memory up to a configurable threshold, which should therefore be set in line with the Docker memory limit.
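For example, the memory high watermark can be lowered in rabbitmq.conf; the value below is illustrative:

```ini
# rabbitmq.conf: start throttling publishers at 40% of detected RAM
vm_memory_high_watermark.relative = 0.4
```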