Installation#

Prerequisites#

Supported operating system#

The currently supported system is Debian 13 for the amd64 architecture. We recommend using a minimal installation without any additional packages.

You can download it from: https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/

Hardware requirements#

For the manager: at least 2 GB of RAM, 2 CPU cores, and 20 GB of disk space.

For the analyser: at least 2 GB of RAM, 4 CPU cores, and 20 GB of disk space.

For the worker: at least 8 GB of RAM, 8 CPU cores, and 20 GB of disk space, plus a CPU with IOMMU support. Additionally, a compatible NIC that supports DPDK is required.

Virtualization is not officially supported. You may, however, try to run the worker inside a VM with PCI passthrough, but performance may vary.

List of NICs supported by DPDK: https://core.dpdk.org/supported/nics/ (this list contains drivers, not actual NIC models)
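Before installing the worker, you can check whether the IOMMU is actually enabled on the host. One quick way (the exact messages vary by platform and firmware) is:

# Look for VT-d / AMD-Vi initialisation messages (output differs per vendor)
dmesg | grep -i -e DMAR -e IOMMU

# If the IOMMU is active, this directory should contain numbered groups
ls /sys/kernel/iommu_groups/

If the iommu_groups directory is empty, enable VT-d/AMD-Vi in the BIOS/UEFI settings and, on Intel platforms, add intel_iommu=on to the kernel command line.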

Important

Before you proceed with installation, please check if your NIC has up-to-date firmware. If not, please update it first.

For example, for Intel X7xx series NICs, you can use: https://www.intel.com/content/www/us/en/download/18190/non-volatile-memory-nvm-update-utility-for-intel-ethernet-network-adapter-700-series.html

For E8xx: https://www.intel.com/content/www/us/en/download/19626/non-volatile-memory-nvm-update-utility-for-intel-ethernet-network-adapters-e810-series-linux.html

Check your vendor website for more details about firmware updates for your NIC model.
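One way to check the firmware currently running on the card (the interface name below is just the example used later in this guide) is ethtool:

# ethtool is not part of a minimal Debian installation
apt update && apt install -y ethtool

# Prints the driver, driver version and firmware-version of the interface
ethtool -i enp4s0np0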

APT repository#

In order to install LiveShield, you need to add our APT repository to your system. Please run all commands as root or using sudo.

  1. Download installation script

    wget https://apt.liveshield.net/install.sh -O /tmp/install.sh
    

    If you don’t have wget installed, install it first:

    apt update && apt install -y wget
    
  2. Run installation script

    chmod +x /tmp/install.sh
    /tmp/install.sh
    
  3. Decide which modules you want to install

    1. To install all modules run:

      apt install -y liveshield
      
    2. To install Manager, run:

      apt install -y liveshield-manager
      
    3. To install Analyser, run:

      apt install -y liveshield-analyser
      
    4. To install Worker, run:

      apt install -y liveshield-worker
      
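After the packages are installed, you can confirm what ended up on the machine and that it was pulled from our APT repository:

# List installed LiveShield packages
dpkg -l | grep liveshield

# Show the repository the liveshield metapackage would be pulled from
apt-cache policy liveshield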

Installation wizard#

After installation, you have to run the installation wizard (as root):

liveshield-installer

This wizard will guide you through the initial configuration of the installed modules. You have to run it on each machine where you have installed LiveShield modules.

Welcome to LiveShield installer
Please select which module you would like to (re)configure:
[1] All
[2] Worker
[3] Analyser
[4] Manager
[5] Worker+Analyser
[6] Analyser+Manager
[7] Quit

If you have installed multiple modules on the same machine, you can select option 1 (All) to configure all of them at once. In all other cases, select the module you want to configure.

In this guide we will use option 1 (All) as it’s the easiest way to install and configure all modules.

What do you want to (re)configure?
[1] All of them
[2] Database connection credentials
[3] Nginx template
[4] Init app modules and database schema
[5] Reset admin password
[6] Update app modules and migrate database schema
[7] Quit

This is the manager section of the installation wizard. We suggest using option 1 (All of them) to configure everything at once.

Do you want to use a local postgresql and influxdb database? Do you want to configure it automatically? [y/n]:

If you want to use database servers located on the same machine as the manager, type ‘y’. This is the recommended setup.

After a few seconds you should see:

Manager database configuration has been finished!

###############################################################
#            Initial admin password: PASSWORD_HERE            #
#           Please save it before closing this window         #
###############################################################


Manager installation has been finished!

Please copy the initial admin password as it will be required to log in to the manager web interface.

No nginx installation found. Do you want to install it? [y/n]:

If there is no nginx installed on your system, the wizard will ask you if you want to install it and configure it automatically. We recommend typing ‘y’ to proceed.

You may use another web server as a reverse proxy for the manager. In that case, type ‘n’ and configure the reverse proxy manually.

Nginx preconfiguration has been finished!

Configuration of analyser connection finished

Analyser module started

You should see the messages above after the wizard finishes configuring the manager and analyser modules.
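Once the wizard has finished, a quick sanity check that nginx accepted the generated configuration might be:

# Validate the nginx configuration files currently on disk
nginx -t

# Confirm the nginx service is running
systemctl status nginx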

The wizard will now proceed to worker configuration.

What do you want to (re)configure?
[1] All of them
[2] CPU
[3] Memory
[4] Network Interface Cards
[5] Connection with analyser
[6] Quit

We suggest using option 1 (All of them) to configure everything at once.

Worker name is very important as this is the way it's identified at analyser
Please don't forget what name you've entered as it'll be necessary for analyser configuration
Please enter this worker instance name (max 31 chars ):

Here you have to configure the name of the worker. It is used to identify the worker at the analyser module. You’ll need this name later when adding the worker to the analyser via the manager web interface.

For this example we’ll use the name “DC1-liveshield1”, but it can be anything you want.

Do you want to use analyser located on the same server? [y/n]:

If you have installed the analyser module on the same machine as the worker, type ‘y’ to configure the connection automatically. Otherwise type ‘n’ and provide the analyser IP address and port.

Below you'll see your physical system topology. This is required only if you have more than one physical CPU
You should read it carefully and termine which NIC adapter (by PCI address) is connected to which CPU
Also please check memory banks if they are the same on both CPUs. If not - please fix it
This knowledge is necessary for further configuration in order to optimally select CPU cores
Press enter to continue...

Machine (63GB total)
Package L#0
   NUMANode L#0 (P#0 31GB)
   Core L#0
   Core L#1
   Core L#2
   Core L#3
   Core L#4
   Core L#5
   Core L#6
   Core L#7
   Core L#8
   Core L#9
   Core L#10
   Core L#11
   Core L#12
   Core L#13
   HostBridge
      PCIBridge
      PCI 01:00.0 (RAID)
         Block(Disk) "sda"
      PCIBridge
      PCI 04:00.0 (Ethernet)
         Net "enp4s0np0"
      PCI 00:11.4 (IDE)
      PCIBridge
      PCIBridge
         PCIBridge
            PCIBridge
            PCI 0a:00.0 (VGA)
      PCIBridge
      PCI 02:00.0 (Ethernet)
         Net "eno1"
      PCI 02:00.1 (Ethernet)
         Net "eno2"
      PCIBridge
      PCI 03:00.0 (Ethernet)
         Net "eno3"
      PCI 03:00.1 (Ethernet)
         Net "eno4"
      PCI 00:1f.2 (IDE)
      PCI 00:1f.5 (IDE)
      Block(Removable Media Device) "sr0"
Package L#1
   NUMANode L#1 (P#1 31GB)
   Core L#14
   Core L#15
   Core L#16
   Core L#17
   Core L#18
   Core L#19
   Core L#20
   Core L#21
   Core L#22
   Core L#23
   Core L#24
   Core L#25
   Core L#26
   Core L#27
Misc(MemoryModule)
Misc(MemoryModule)
Misc(MemoryModule)
Misc(MemoryModule)
Misc(MemoryModule)
Misc(MemoryModule)
Misc(MemoryModule)
Misc(MemoryModule)
Press enter to continue...

This is an example of the system topology displayed by the wizard. It shows physical CPUs, cores, memory nodes, and PCI devices. You should analyse it carefully to determine which NICs are connected to which CPU. In this case you can see that the dedicated 40G NIC is connected to CPU 0 (Package L#0) at PCI address 04:00.0.

PCI 04:00.0 (Ethernet)
   Net "enp4s0np0"

So in this case, only cores from CPU socket 0 should be used for this NIC in order to achieve optimal performance. If you have a single-CPU system, you can ignore this step.
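If the topology listing is hard to read, the kernel also exposes the NUMA node of each PCI device directly in sysfs; for the example NIC above (you can run this from a second terminal):

# NUMA node (CPU package) the 40G NIC is attached to; 0 corresponds to Package L#0
cat /sys/bus/pci/devices/0000:04:00.0/numa_node

# A value of -1 means the platform does not report a node (e.g. single-socket systems)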

Below you'll see the list of CPU packages and cores (grouped in threads when HT is enabled)
Please select which cores you would like to reserve for LiveShield worker module
Example: When you have two NICs, each running 4 queues (i.e. 4x10G) each bound to different CPU, you should reserve 4 cores from each CPU + 1 main core from any CPU
Press enter to continue...

======================================================================
Core and Socket Information (as reported by '/sys/devices/system/cpu')
======================================================================

cores =  [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14]
sockets =  [0, 1]

      Socket 0        Socket 1
      --------        --------
Core 0  [0, 28]         [1, 29]
Core 1  [2, 30]         [3, 31]
Core 2  [4, 32]         [5, 33]
Core 3  [6, 34]         [7, 35]
Core 4  [8, 36]         [9, 37]
Core 5  [10, 38]        [11, 39]
Core 6  [12, 40]        [13, 41]
Core 8  [14, 42]        [15, 43]
Core 9  [16, 44]        [17, 45]
Core 10 [18, 46]        [19, 47]
Core 11 [20, 48]        [21, 49]
Core 12 [22, 50]        [23, 51]
Core 13 [24, 52]        [25, 53]
Core 14 [26, 54]        [27, 55]


Please enter list of cores, separated by comma:

Because our CPUs have Hyper-Threading enabled, each physical core is represented by two logical cores. Since in our example the single 40G NIC is connected to CPU 0, we should pick the NIC queue cores from Socket 0, plus one main core (which can come from any CPU).

How many cores you should select depends on the number of NICs and the number of queues used for packet processing. We stick to the rule of one core per queue. Please note that too many queues (cores) may lead to performance degradation due to excessive context switching and inefficient CPU cache usage. Our general recommendation is to use 1 queue (core) per 10G of traffic, so in this case we will need 4 cores + 1 main core = 5 cores in total.

Because each physical core is represented by two logical cores, you should reserve both logical cores of each physical core you want to use. You can always reserve more CPU cores than you currently need; this may be useful in case of a future traffic increase, so you won’t have to reboot the machine. All cores reserved for the LiveShield worker will be unavailable to other system processes, so make sure you leave enough cores for the OS and other applications, especially if you’re running the analyser and manager on the same machine.

So in this example, we’ve decided to use core 0 as the main core and cores 2, 4, 6, 8 for the 40G NIC queues. Because of HT, we have to reserve cores 0,2,4,6,8 and their siblings 28,30,32,34,36.
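If you want to double-check the sibling pairs before entering the list, sysfs reports them per logical CPU; for example, for cores 0 and 2 used above (e.g. from a second terminal):

# Each file lists the logical CPUs that share one physical core, e.g. "0,28"
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list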

Please enter list of cores, separated by comma: 0,2,4,6,8,28,30,32,34,36

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.12.63+deb13-amd64
Found initrd image: /boot/initrd.img-6.12.63+deb13-amd64
Found linux image: /boot/vmlinuz-6.12.57+deb13-amd64
Found initrd image: /boot/initrd.img-6.12.57+deb13-amd64
Adding boot menu entry for UEFI Firmware Settings ...
done
Kernel startup line updated successfully. Changes will be visible after reboot!

The script edited the “/etc/default/grub” file to reserve the selected CPU cores for the LiveShield worker module.
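After the reboot you can verify that the reservation took effect. The exact parameters written by the installer may differ, but assuming an isolcpus-style isolation the selected cores should show up here:

# Kernel command line currently in effect (should contain the reserved core list)
cat /proc/cmdline

# Cores isolated from the general scheduler; empty output means no isolation is active
cat /sys/devices/system/cpu/isolated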

Below you'll see actual memory usage and proceed to reserve some for LiveShield worker
Please leave some memory for system and other services. We recommend leaving at least 4GB for other services
If you have multi-processor system, the memory will be reserved on both CPUs
Also your hardware and OS must support desired amount of hugepages (n*1GB)
Please ensure that your RAM modules are the same on each CPU


Below you'll find current memory statistics (total - all nodes)
Press enter to continue...

               total        used        free      shared  buff/cache   available
Mem:             67G         10G         56G        6.5M        1.5G         57G
Swap:             0B          0B          0B


Please enter desired memory reservation (in GB):

The minimum recommended hugepage memory for the worker is 4 GB. However, if you can, allocate 8 GB or more, as it will make future scaling easier.

Please make sure you have the same memory size on both CPUs if you’re using a multi-processor system. Asymmetry may lead to performance degradation or crashes.
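After the reboot you can also check that the hugepages were actually allocated, and that they are split evenly across NUMA nodes on a multi-processor system:

# Overall hugepage counters and page size
grep -i huge /proc/meminfo

# Per-NUMA-node allocation (path assumes the 1 GB hugepages described by the wizard)
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages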

Would you like to confirm memory reservation? [y/n]:

Confirm your choice

Below you'll see list of recognized network adapters (NIC) on your system
Press enter to continue...

0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=
0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 unused=
0000:03:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 unused=
0000:03:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno4 drv=tg3 unused=
0000:04:00.0 'Ethernet Controller XL710 for 40GbE QSFP+ 1584' if=enp4s0np0 drv=i40e unused=


Above you can see list of devices running on the system. You have to choose which devices you would like to use with LiveShield
Please note, if you have multiple-port NIC, you'll have to add all ports. Otherwise it'll not be able to work
You have to enter device name in "bus:slot.func" format i.e. "0000:04:00.0"


Please enter NIC port (leave blank and press enter to finish):

Because we are bypassing the kernel network stack, you have to choose which NICs you want to dedicate to the LiveShield worker module. Please remember that once a NIC is assigned to the LiveShield worker, it will be unavailable to other system processes until you reconfigure it.

In this example we have a single-port 40G Intel NIC, displayed as:

0000:04:00.0 'Ethernet Controller XL710 for 40GbE QSFP+ 1584' if=enp4s0np0 drv=i40e unused=

So, we’ll enter “0000:04:00.0” to assign this NIC to the LiveShield worker module. In case of multi-port NICs, please make sure you add all the ports you want to use; otherwise the NIC may fail to bind or other issues can occur. You can add multiple NICs if you want to, but please remember the original order of NICs on that list (based on PCI addresses), as it will be important when configuring the analyser module later.
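A simple way to record that order before the interfaces disappear from the system (for example from a second terminal; the output file path is just a suggestion) is to save the full PCI addresses of all Ethernet devices:

# -D prints the full bus:slot.func addresses in the same format the wizard expects
lspci -D | grep -i ethernet | tee /root/liveshield-nic-order.txt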

Please enter NIC port (leave blank and press enter to finish): 0000:04:00.0
Please enter NIC port (leave blank and press enter to finish):


Below you can find list of devices entered by you:
0000:04:00.0

We will now change the driver of those devices in order to allow direct PCI access from LiveShield
WARNING! Please note that those devices will disappear from the system. You'll no longer see them in network adapter list
Any network connections performed by those interfaces will be closed!

Would you like to confirm driver change? [y/n]: y
vfio-pci kernel module loaded
Binding 0000:04:00.0 to vfio-pci driver


Now we will list again all network devices in the system. Please look at the "DPDK compatible driver" section
You should see all your selected devices right there. Please remember that order of devices is crucial in further configuration
We recommend to save the order of the devices and their pci addresses


Press enter to continue...


Network devices using DPDK-compatible driver
============================================
0000:04:00.0 'Ethernet Controller XL710 for 40GbE QSFP+ 1584' drv=vfio-pci unused=i40e

Network devices using kernel driver
===================================
0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=vfio-pci
0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 unused=vfio-pci
0000:03:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 unused=vfio-pci
0000:03:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno4 drv=tg3 unused=vfio-pci


Configuration of NICs is finished. If the list shows something incorrect, please restart installer

Here is an example of the NIC binding configuration. We bound our 40G Intel NIC to the vfio-pci driver so the LiveShield worker can access it directly. As you can see, it’s now listed under the “Network devices using DPDK-compatible driver” section and is no longer available to the operating system.
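The binding itself is handled by the installer, but if you ever need to hand a NIC back to the kernel manually (for example, after selecting the wrong port), the dpdk-devbind.py utility shipped with DPDK, if it is present on your system, can do it; otherwise, simply restart the installer as suggested above.

# Show which driver each NIC is currently bound to
dpdk-devbind.py --status

# Rebind the example NIC to its original kernel driver (i40e in our case)
dpdk-devbind.py -b i40e 0000:04:00.0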

The LiveShield installation process is now finished. You have to reboot the system to apply all changes.

After reboot, please proceed to Base configuration to finalize the setup in the Web GUI.