Installation

Get Help

If you are a Codenvy customer, you can open an email ticket for 24/7/365 support.

If you are having a problem starting Codenvy or workspaces, two diagnostic utilities can help: docker run codenvy/cli info on the command line for diagnosing boot-time issues, and a “diagnostic” page that you can launch from the lower corner of the dashboard that loads when Codenvy first opens in your browser.

We want everyone to have a great experience installing and running Codenvy. If you run into an issue, please open a GitHub issue providing:

  • Output of the docker run codenvy/cli info command
  • If requested, a support package created with docker run codenvy/cli info --bundle

Quick Start

With Docker 1.11+ (1.12.5+ recommended) on Windows, Mac, or Linux:

$ docker run codenvy/cli start

This gives you additional instructions on how to run the Codenvy CLI while setting your hostname, configuring volume mounts, and testing your Docker setup. For full install syntax see below.

Licensing

Codenvy starts with a Fair Source 3 license, which allows up to three users in an organization to use Codenvy with full functionality and limited liabilities and warranties. You can request a trial license from Codenvy for more than 3 users or purchase one from our friendly sales team (sales@codenvy.com). Once you gain the license, start Codenvy and then apply the license in the admin dashboard that is accessible with your login credentials.

Licenses require the host to be connected to the internet. If you require a license for a system that isn’t connected to the internet please contact sales@codenvy.com.

Installation

For readability, we abbreviate the full docker run ... syntax as codenvy [COMMAND].

Sample Start

An example that installs and starts Codenvy with its data saved on a Windows file system at C:\tmp:

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /c/tmp:/data codenvy/cli start

This installs a Codenvy configuration, downloads Codenvy’s Docker images, runs pre-flight port checks, boots Codenvy’s services, and runs post-flight checks. You do not need root access to start Codenvy, unless your environment requires it for Docker operations.

A successful start will display:

INFO: Proxy: HTTP_PROXY=, HTTPS_PROXY=, NO_PROXY=*.local, 169.254/16
INFO: (codenvy cli): nightly - using docker 1.13.0 / docker4windows
INFO: (codenvy restart): Restarting...
INFO: (codenvy stop): Stopping containers...
INFO: (codenvy stop): Removing containers...
INFO: (codenvy config): Generating codenvy configuration...
INFO: (codenvy config): Customizing docker-compose for running in a container
INFO: (codenvy start): Preflight checks
         mem (1.5 GiB):           [OK]
         disk (100 MB):           [OK]
         port 80 (http):          [AVAILABLE]
         port 443 (https):        [AVAILABLE]
         port 2181 (zookeeper):   [AVAILABLE]
         port 5000 (registry):    [AVAILABLE]
         port 23750 (socat):      [AVAILABLE]
         port 23751 (swarm):      [AVAILABLE]
         port 32000 (jmx):        [AVAILABLE]
         port 32001 (jmx):        [AVAILABLE]
         conn (browser => ws):    [OK]
         conn (server => ws):     [OK]

INFO: (codenvy start): Starting containers...
INFO: (codenvy start): Services booting...
INFO: (codenvy start): Server logs at "docker logs -f codenvy_codenvy_1"
INFO: (codenvy start): Postflight checks
         (10.0.75.2:23750/info):  [OK]

INFO: (codenvy start): Booted and reachable
INFO: (codenvy start): Ver: nightly
INFO: (codenvy start): Use: http://10.0.75.2:80
INFO: (codenvy start): API: http://10.0.75.2:80/swagger

The administrative login is case-sensitive:

  • User: admin
  • Password: password

Logs and User Data

When Codenvy initializes itself, it stores logs, user data, database data, and instance-specific configuration in the folder mounted to :/data/instance, or in an instance subfolder of whatever you mounted to :/data.

Codenvy’s containers save their logs in the same location:

/instance/logs/codenvy/<year>               # Server logs
/instance/logs/codenvy/che-machine-logs     # Workspace logs
/instance/logs/nginx                        # nginx access and error logs
/instance/logs/haproxy                      # HAproxy logs
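These logs can be inspected while the system runs. A minimal sketch, using the container name shown in the startup output earlier and a `<local-path>` placeholder for whatever folder you mounted to :/data:

```shell
# Follow the Codenvy server log via Docker; the container name
# codenvy_codenvy_1 is taken from the startup output above.
docker logs -f codenvy_codenvy_1

# Or browse the on-disk server logs under your :/data mount.
ls <local-path>/instance/logs/codenvy/
```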

User data is stored in:

/instance/data/codenvy                      # Project backups (we synchronize projs from remote ws here)
/instance/data/postgres                     # Postgres data folder (users, workspaces, stacks etc)
/instance/data/registry                     # Workspace snapshots

Instance configuration is generated by Codenvy and is updated by our internal configuration utilities. These ‘generated’ configuration files should not be modified:

/instance/codenvy.ver.do_not_modify         # Version of Codenvy installed
/instance/docker-compose-container.yml      # Docker compose to launch internal services
/instance/docker-compose.yml                # Docker compose to launch Codenvy from the host without a container
/instance/config                            # Configuration files which are input mounted into the containers

Versions

Each version of Codenvy is available as a Docker image tagged with a label that matches the version, such as codenvy/cli:5.0.0-M7. You can see all versions available by running docker run codenvy/cli version or by browsing DockerHub.

We maintain “redirection” labels which reference special versions of Codenvy:

  • latest: The most recent stable release of Codenvy.
  • 5.0.0-latest: The most recent stable release of Codenvy on the 5.x branch.
  • nightly: The nightly build of Codenvy.

The software referenced by these labels can change over time. Since Docker will cache images locally, the codenvy/cli:<version> image that you are running locally may not be current with the one cached on DockerHub. Additionally, the codenvy/cli:<version> image that you are running references a manifest of Docker images that Codenvy depends upon, which can also change if you are using these special redirection tags.

In the case of ‘latest’ images, when you initialize an installation using the CLI, we encode your /instance/codenvy.ver file with the numbered version that latest references. If you begin using a CLI version that does not match the installed version, you will be presented with an error.

To avoid issues that can appear from using ‘nightly’ or ‘latest’ redirections, you may:

  1. Verify that you have the most recent version with docker pull codenvy/cli:<version>.
  2. When running the CLI, commands that use other Docker images have optional --pull and --force command line options, which instruct the CLI to check DockerHub for a newer version and pull it down. Using these flags slows performance but ensures that your local cache is current.

If you are running Codenvy using a tagged version that is not a redirection label, such as 5.0.0-M7, then these caching issues will not happen.
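For example, to pin an installation to a numbered tag instead of a redirection label (5.0.0-M7 is the version used elsewhere in this document; substitute your own, and `<local-path>` is a placeholder for your :/data mount):

```shell
# Pull and run a specific numbered version to avoid redirection-tag caching.
docker pull codenvy/cli:5.0.0-M7
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
           -v <local-path>:/data codenvy/cli:5.0.0-M7 start
```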

Volume Mounts

We use volume mounts to configure certain parts of Codenvy. The presence or absence of certain volume mounts will trigger certain behaviors in the system. For example, you can volume mount a Codenvy source git repository with :/repo to activate development mode, where we start Codenvy’s containers using source code from the repository instead of the software inside the default containers.

At a minimum, you must volume mount a local path to :/data, which will be the location that Codenvy installs its configuration, user data, version and log information. Codenvy also leaves behind a cli.log file in this location to debug any odd behaviors while running the system. In this folder we also create a codenvy.env file which contains all of the admin configuration that you can set or override in a single location.

You can also use volume mounts to override where your user or backup data is stored. By default, these folders are created as sub-folders of the location that you mount to :/data. However, if you do not want your /instance and /backup folders to be children, you can set them individually with separate overrides:

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
                    -v <local-path>:/data \
                    -v <a-different-path>:/data/instance \
                    -v <another-path>:/data/backup \
                       codenvy/cli:<version> [COMMAND]

Hostnames

CODENVY_HOST is the IP address or DNS name at which the Codenvy endpoint will service your users. Codenvy will attempt to auto-set the hostname by running an internal utility, docker run --net=host eclipse/che-ip:nightly. This approach is not fool-proof: the utility is usually accurate on desktops but often fails on hosted servers. If it fails, you can explicitly set this value when executing docker run:

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
                    -v <local-path>:/data \
                    -e CODENVY_HOST=<your-ip-or-host> \
                       codenvy/cli:<version> [COMMAND]

Alternatively, you can edit the CODENVY_HOST value in codenvy.env.
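That edit can also be scripted. A minimal sketch; here a throwaway copy stands in for the real codenvy.env, so point DATA at the folder you actually mounted to :/data, and note the IP address is illustrative:

```shell
# Update CODENVY_HOST in codenvy.env non-interactively (sketch).
# A temp dir stands in for your real :/data mount in this demo.
DATA=$(mktemp -d)
printf 'CODENVY_HOST=10.0.75.2\n' > "$DATA/codenvy.env"
sed -i 's/^CODENVY_HOST=.*/CODENVY_HOST=192.168.99.100/' "$DATA/codenvy.env"
grep CODENVY_HOST "$DATA/codenv"y.env""
```

Restart Codenvy after changing the value so it takes effect.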

Proxy Installation

Codenvy can be installed and operated from behind a proxy:

  1. Configure each physical node’s Docker daemon with proxy access.
  2. Optionally, override the default workspace proxy settings for users if you want to restrict their Internet access.

Before starting Codenvy, configure Docker’s daemon for proxy access. If you plan to scale Codenvy with multiple host nodes, each host node must have its Docker daemon configured for proxy access. If you are installing Codenvy with Docker for Windows or Docker for Mac on your desktop, these utilities have a GUI in their settings that lets you set the proxy directly.

HTTP_PROXY and/or HTTPS_PROXY set in the Docker daemon must include a protocol and port number. Proxy configuration is quite finicky, so please ensure you provide a fully qualified proxy location.
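On a Linux host running systemd, daemon proxy access is typically configured with a drop-in file; the proxy host and port below are placeholders for your own proxy (on Docker for Windows/Mac, use the GUI settings instead):

```shell
# Example systemd drop-in giving the Docker daemon proxy access (Linux).
# proxy.example.com:3128 is a placeholder for your proxy's host and port.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```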

If you configure HTTP_PROXY or HTTPS_PROXY in your Docker daemon, Codenvy automatically adds localhost,127.0.0.1,codenvy-swarm,CODENVY_HOST to your NO_PROXY value, where CODENVY_HOST is your host’s DNS name or IP address. We recommend that you check these values in your codenvy.env and add the short and long form DNS entries to your Docker daemon’s NO_PROXY setting if they are not already set.

This is the full set of proxy-related values in the codenvy.env. You can optionally modify these with different values.

CODENVY_HTTP_PROXY_FOR_CODENVY=<YOUR_PROXY_FROM_DOCKER>
CODENVY_HTTPS_PROXY_FOR_CODENVY=<YOUR_PROXY_FROM_DOCKER>
CODENVY_NO_PROXY_FOR_CODENVY=localhost,127.0.0.1,codenvy-swarm,<YOUR_CODENVY_HOST>

CODENVY_HTTP_PROXY_FOR_CODENVY_WORKSPACES=<YOUR_PROXY_FROM_DOCKER>
CODENVY_HTTPS_PROXY_FOR_CODENVY_WORKSPACES=<YOUR_PROXY_FROM_DOCKER>
CODENVY_NO_PROXY_FOR_CODENVY_WORKSPACES=localhost,127.0.0.1,<YOUR_CODENVY_HOST>,<YOUR_MAVEN_REPO>,<YOUR_YUM_REPO>

The last three entries are injected into workspaces created by your users. This gives your users access to the Internet from within their workspaces. You can comment out these entries to disable access. However, if that access is turned off, then the default templates with source code will fail to be created in workspaces, as those projects are cloned from GitHub.com. Your workspaces are still functional; we just prevent the template cloning. If you use any custom yum or Maven repositories in workspaces, you may need to add them to the NO_PROXY list for Codenvy workspaces.

If you create a workspace from a custom recipe and any sudo commands are executed as part of Dockerfile instructions, make sure you use sudo -E, for example: sudo -E apt-get install python -y

DNS Resolution

The default behavior is for Codenvy and its workspaces to inherit DNS resolver servers from the host. You can override these resolvers by setting CODENVY_DNS_RESOLVERS in the codenvy.env file and restarting Codenvy. DNS resolvers allow programs and services that are deployed within a user workspace to perform DNS lookups with public or internal resolver servers. In some environments, custom resolution of DNS entries (usually to an internal DNS provider) is required to enable the Codenvy server and the workspace runtimes to have lookup ability for internal services.

# Update your codenvy.env with comma separated list of resolvers:
CODENVY_DNS_RESOLVERS=10.10.10.10,8.8.8.8

Firewall Tests

Firewalls typically cause traffic problems when you are starting a new workspace or adding a new physical node for scaling. In certain network configurations we direct network traffic between workspaces and Codenvy through external IP addresses, which can flow through routers or firewalls. If ports or protocols are blocked, then certain Codenvy functions will be unavailable.

Running Codenvy Behind a Firewall (Linux/Mac)

# Check to see if firewall is running:
systemctl status firewalld

# Check for list of open ports
# Verify that ports 80/tcp, 443/tcp, 2376/tcp, 4789/udp, 7946/tcp+udp, 23750/tcp, and 32768-65535/tcp are open
firewall-cmd --list-ports

# Optionally open ports on your local firewall:
firewall-cmd --permanent --add-port=80/tcp
... and so on

# You can also verify that ports are open:
nmap -Pn -p <port> localhost

# If the port is closed, then you need to open it by editing /etc/pf.conf.
# For example, open port 1234 for TCP for all interfaces:
pass in proto tcp from any to any port 1234

# And then restart your firewall

If you are going to scale Codenvy with additional workspace nodes, then ports 2375/tcp, 2376/tcp, 4789/udp, 7946/tcp+udp, and 32768-65535/tcp must also be open on each workspace node.
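With firewalld, the workspace-node ports above can be opened in a loop. A sketch, to be run as root on each node; adapt the port list to your setup:

```shell
# Open the ports required on each workspace node with firewalld (sketch).
for p in 2375/tcp 2376/tcp 4789/udp 7946/tcp 7946/udp 32768-65535/tcp; do
  firewall-cmd --permanent --add-port="$p"
done
firewall-cmd --reload
```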

If you are going to use the embedded Zabbix monitor that is deployed with Codenvy, then you must also have port 10050 tcp open on the master node and the workspace nodes.

Running Codenvy Behind a Firewall (Windows)

There are many third party firewall services. Different versions of Windows OS also have different firewall configurations. The built-in Windows firewall can be configured in the control panel under “System and Security”:

  1. In the left pane, right-click Inbound Rules, and then click New Rule in the action pane.
  2. In the Rule Type dialog box, select Port, and then click Next.
  3. In the Protocol and Ports dialog box, select TCP.
  4. Select Specific local ports, enter the port number to be opened, and then click Next.
  5. In the Action dialog box, select Allow the Connection, and then click Next.
  6. In the Name dialog box, type a name and description for this rule, and then click Finish.

Offline Installation

We support offline (disconnected from the Internet) installation and operation. This is helpful for restricted environments, regulated datacenters, or offshore installations. The offline installation downloads the CLI, core system images, and any stack images while you are within a network DMZ with DockerHub access. You can then move those files to a secure environment and start Codenvy.

1. Save Codenvy Images

While connected to the Internet, download Codenvy’s Docker images:

docker run codenvy/cli offline

The CLI will download images and save them to /backup/*.tar, with each image saved as its own file. You can save these files to a different location by volume mounting a local folder to :/data/backup. The version tag of the CLI Docker image determines which versions of dependent images to download. About 1 GB of data will be saved.

The default execution downloads none of the optional stack images, which are needed to launch workspaces of a particular type. There are a few dozen stacks for different programming languages and some of them are over 1 GB in size. It is unlikely that your users will need all of the stacks, so you do not need to download all of them. You can get a list of available stack images by running codenvy offline --list. You can download a specific stack by running codenvy offline --image:<image-name>, and the --image flag can be repeated on a single command line.
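For example, picking two stack images from the --list output (the `<stack-image-*>` names are placeholders for images you choose):

```shell
# List downloadable stack images, then fetch two of them into /tmp/offline.
docker run codenvy/cli offline --list
docker run -v /tmp/offline:/data/backup codenvy/cli offline \
           --image:<stack-image-1> --image:<stack-image-2>
```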

2. Start Codenvy In Offline Mode

Place the TAR files into a folder on the offline computer. If the files are placed in a folder named /tmp/offline, you can run Codenvy in offline mode with:

# Load the CLI
docker load < /tmp/offline/codenvy_cli:<version>.tar

# Start Codenvy in offline mode
docker run <other-properties> -v /tmp/offline:/data/backup codenvy/cli:<version> start --offline

The --offline parameter instructs the Codenvy CLI to load all of the TAR files located in the folder mounted to /data/backup. These images are then used instead of checking DockerHub over the Internet. The preboot sequence takes place before any CLI functions make use of Docker. The codenvy start, codenvy download, and codenvy init commands support --offline mode, which triggers this preboot sequence.

Uninstall

# Remove your Codenvy configuration and destroy user projects and database
docker run codenvy/cli:<version> destroy [--quiet|--cli]

# Delete Codenvy's images from your Docker registry
docker run codenvy/cli:<version> rmi

# Delete the Codenvy CLI
docker rmi -f codenvy/cli

System Requirements

Codenvy installs on Linux, Mac and Windows.

Hardware

The Codenvy server requires a minimum of:

  • 2 cores
  • 4GB RAM
  • 3GB disk space

Codenvy services require 2 GB storage and 4 GB RAM. The RAM, CPU and storage resources required for your users’ workspaces are additive. Codenvy’s Docker images consume ~900MB of disk and the Docker images for your workspace templates can each range from 5MB up to 1.5GB. Codenvy and its dependent core containers will consume about 500MB of RAM, and your running workspaces will each require at least 250MB RAM, depending upon user requirements and complexity of the workspace code and intellisense. Java workspaces, for example, typically require ~750MB for workspace agents.
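As a back-of-envelope check of these figures (the workspace counts below are made up; all values are in MB):

```shell
# Rough RAM budget: 500 MB core services, 250 MB per basic workspace,
# ~750 MB per Java workspace. Workspace counts are illustrative.
core=500; java_ws=4; basic_ws=10
total=$(( core + java_ws*750 + basic_ws*250 ))
echo "Estimated RAM: $total MB"
```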

Boot2Docker, docker-machine, Docker for Windows, and Docker for Mac are all Docker variations that launch a VM and run Docker inside it, with access to Docker from your host. We recommend increasing your default VM size to at least 4GB. Each of these technologies has different ways to allow host folder mounting into the VM. Please enable this for your OS so that Codenvy data is persisted on your host disk.

Software

  • Docker 1.11+ (1.12.5+ recommended)

The Codenvy CLI - a Docker image - manages the other Docker images and supporting utilities that Codenvy uses during its configuration or operations phases. The CLI also provides utilities for downloading an offline bundle to run Codenvy while disconnected from the network.

Given the nature of the development and release cycle, it is important that you have the latest version of Docker installed, because any issue you encounter might already have been fixed in a newer Docker release.

Install the most recent version of the Docker Engine for your platform using the official Docker releases, including support for Mac and Windows! If you are on Linux, you can also install with wget -qO- https://get.docker.com/ | sh.

Fedora and RHEL/CentOS users sometimes encounter issues with SELinux. Try disabling SELinux with setenforce 0 and check if that resolves the issue. If using the latest Docker version and/or disabling SELinux does not fix the issue, then please file an issue on the issues page. If you are a licensed customer of Codenvy, you can get prioritized support at support@codenvy.com.

IP Addresses

The hostname or IP address that you give to the Codenvy master node (and any optional workspace nodes) must be externally reachable by each browser. In scalability mode, you can create a cluster of workspace nodes by connecting different Docker daemons together. Even though the cluster is an internal object, each workspace node must be listening on a publicly reachable IP address or hostname.

Required Ports

Codenvy’s runtime launches a group of Docker containers in a compose relationship. The master node is where Codenvy is installed and running. In scalability mode, you can add additional physical “machine” nodes, which run developer workspaces, to increase system capacity.

Master Node

If you have not added any additional physical workspace nodes, then the Codenvy master node runs core services and workspaces.

AIO.png

Master Node: External Ports

All ports are TCP unless otherwise noted.

  • 80 / 443 (HAProxy HTTP/S): HTTP is the default. If you configure HTTP/S, then port 80 can be closed.
  • 5000 (Docker Registry): Embedded registry to save workspace snapshots. This port is not required if you configure an external registry or have not added additional workspace nodes.
  • 23750 (Docker): On a fresh install, Docker is running on port 23750, and Swarm reaches it at $CODENVY_HOST:23750. If the master node isn’t used to run workspaces, this port can be filtered.
  • 32768-65535 (Docker and Codenvy Agent): Users who launch servers in their workspace bind to ephemeral ports in this range. This range can be limited by changing the size of the ephemeral range on the host OS.

Master Node: Internal Ports

All ports are TCP unless otherwise noted.

  • 81: Nginx
  • 2181: ZooKeeper
  • 2375: Docker
  • 5432: Postgres
  • 8080: Codenvy Server

Machine Nodes

You can add as many workspace nodes as required to handle additional demand.

master_plus_node.png

Machine Node: External Ports

All ports are TCP unless otherwise noted.

  • 80 / 443 (HAProxy HTTP/S): HTTP is the default. If you configure HTTP/S, then port 80 can be closed.
  • 32768-65535 (Docker and Codenvy Agents): Users who launch servers in their workspace bind to ephemeral ports in this range. This range can be limited by changing the size of the ephemeral range on the host OS.

The Docker daemon must be remotely accessible by Codenvy, so it has to be set up to listen on a TCP socket. This port only needs to be accessible to the Codenvy master node.

Machine Node: Internal Ports

All ports are TCP unless otherwise noted.

  • 2375 (Docker Daemon): Swarm must be able to reach the Docker daemon from the master node. If the master node is in a different network, this port must be externally accessible.
  • 4789 (Docker Overlay, UDP): Workspace nodes use this port to create overlay networks. If workspace nodes are in different networks, this port must be externally accessible.
  • 7946 (Docker Overlay, TCP + UDP): Workspace nodes use this port to create overlay networks. If workspace nodes are in different networks, this port must be externally accessible.