Preparing Virtual Machines
Create Ubuntu VMs with vagrant
Add the Ubuntu Bionic 64-bit box from the Vagrant public box catalog:
vagrant box add ubuntu/bionic64

==> box: Loading metadata for box 'ubuntu/bionic64'
    box: URL: https://vagrantcloud.com/ubuntu/bionic64
==> box: Adding box 'ubuntu/bionic64' (v20200225.0.0) for provider: virtualbox
    box: Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200225.0.0/providers/virtualbox.box
    box: Download redirected to host: cloud-images.ubuntu.com
    box:
==> box: Successfully added box 'ubuntu/bionic64' (v20200225.0.0) for 'virtualbox'!
Create the project folder and initialize Vagrantfile:
mkdir -p ~/Code/simple-scalable-ha &&
cd ~/Code/simple-scalable-ha &&
vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
Edit the Vagrantfile (on Windows: `notepad .\Vagrantfile`, on Linux: `nano Vagrantfile`) and replace its contents with the following:
Vagrant.configure("2") do |config|
  config.vm.define "loadbalancer" do |loadbalancer|
    loadbalancer.vm.box = "ubuntu/bionic64"
    loadbalancer.vm.hostname = "loadbalancer"
    loadbalancer.vm.box_check_update = false
    loadbalancer.vm.network "private_network", ip: "192.168.50.100"
    loadbalancer.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
  config.vm.define "backend1" do |backend1|
    backend1.vm.box = "ubuntu/bionic64"
    backend1.vm.hostname = "backend1"
    backend1.vm.box_check_update = false
    backend1.vm.network "private_network", ip: "192.168.50.110"
    backend1.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
  config.vm.define "backend2" do |backend2|
    backend2.vm.box = "ubuntu/bionic64"
    backend2.vm.hostname = "backend2"
    backend2.vm.box_check_update = false
    backend2.vm.network "private_network", ip: "192.168.50.120"
    backend2.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
  config.vm.define "backend3" do |backend3|
    backend3.vm.box = "ubuntu/bionic64"
    backend3.vm.hostname = "backend3"
    backend3.vm.box_check_update = false
    backend3.vm.network "private_network", ip: "192.168.50.130"
    backend3.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
  config.vm.define "staticfile" do |staticfile|
    staticfile.vm.box = "ubuntu/bionic64"
    staticfile.vm.hostname = "staticfile"
    staticfile.vm.box_check_update = false
    staticfile.vm.network "private_network", ip: "192.168.50.140"
    staticfile.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
  config.vm.define "redis" do |redis|
    redis.vm.box = "ubuntu/bionic64"
    redis.vm.hostname = "redis"
    redis.vm.box_check_update = false
    redis.vm.network "private_network", ip: "192.168.50.150"
    redis.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
  config.vm.define "database" do |database|
    database.vm.box = "ubuntu/bionic64"
    database.vm.hostname = "database"
    database.vm.box_check_update = false
    database.vm.network "private_network", ip: "192.168.50.160"
    database.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
end
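The seven machine definitions above differ only in name and IP address. As a sketch (not part of the original setup), the whole file could be generated from a name:ip list with a small POSIX shell script; `Vagrantfile.generated` is a hypothetical output name, rename it to `Vagrantfile` to use it:

```shell
#!/bin/sh
# Generate the repetitive Vagrantfile above from a name:ip list.
# Assumes the same box/memory/cpu settings for every machine.
out=Vagrantfile.generated
{
  echo 'Vagrant.configure("2") do |config|'
  for pair in loadbalancer:192.168.50.100 backend1:192.168.50.110 \
              backend2:192.168.50.120 backend3:192.168.50.130 \
              staticfile:192.168.50.140 redis:192.168.50.150 \
              database:192.168.50.160; do
    name=${pair%%:*} ip=${pair##*:}
    cat <<EOF
  config.vm.define "$name" do |node|
    node.vm.box = "ubuntu/bionic64"
    node.vm.hostname = "$name"
    node.vm.box_check_update = false
    node.vm.network "private_network", ip: "$ip"
    node.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 1
    end
  end
EOF
  done
  echo 'end'
} > "$out"
```

Whether the generated form is worth it is a judgment call; the explicit version above is easier to tweak per machine.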
Bring up all the configured VMs with this command:

vagrant up
Bringing machine 'loadbalancer' up with 'virtualbox' provider...
Bringing machine 'backend1' up with 'virtualbox' provider...
Bringing machine 'backend2' up with 'virtualbox' provider...
Bringing machine 'backend3' up with 'virtualbox' provider...
Bringing machine 'staticfile' up with 'virtualbox' provider...
Bringing machine 'redis' up with 'virtualbox' provider...
Bringing machine 'database' up with 'virtualbox' provider...
==> loadbalancer: Importing base box 'ubuntu/bionic64'...
==> loadbalancer: Matching MAC address for NAT networking...
==> loadbalancer: Setting the name of the VM: simple-scalable-ha_loadbalancer_1583214317709_93178
==> loadbalancer: Clearing any previously set network interfaces...
==> loadbalancer: Preparing network interfaces based on configuration...
    loadbalancer: Adapter 1: nat
    loadbalancer: Adapter 2: hostonly
==> loadbalancer: Forwarding ports...
    loadbalancer: 22 (guest) => 2222 (host) (adapter 1)
==> loadbalancer: Running 'pre-boot' VM customizations...
==> loadbalancer: Booting VM...
==> loadbalancer: Waiting for machine to boot. This may take a few minutes...
    loadbalancer: SSH address: 127.0.0.1:2222
    loadbalancer: SSH username: vagrant
    loadbalancer: SSH auth method: private key
    loadbalancer: Warning: Connection reset. Retrying...
    loadbalancer: Warning: Remote connection disconnect. Retrying...
    loadbalancer:
    loadbalancer: Vagrant insecure key detected. Vagrant will automatically replace
    loadbalancer: this with a newly generated keypair for better security.
    loadbalancer:
    loadbalancer: Inserting generated public key within guest...
    loadbalancer: Removing insecure key from the guest if it's present...
    loadbalancer: Key inserted! Disconnecting and reconnecting using new SSH key...
==> loadbalancer: Machine booted and ready!
...
a few lines omitted
...
==> backend1: Importing base box 'ubuntu/bionic64'...
==> backend1: Matching MAC address for NAT networking...
==> backend1: Setting the name of the VM: simple-scalable-ha_backend1_1583214374270_94466
==> backend1: Fixed port collision for 22 => 2222. Now on port 2200.
==> backend1: Clearing any previously set network interfaces...
==> backend1: Preparing network interfaces based on configuration...
    backend1: Adapter 1: nat
    backend1: Adapter 2: hostonly
==> backend1: Forwarding ports...
    backend1: 22 (guest) => 2200 (host) (adapter 1)
==> backend1: Running 'pre-boot' VM customizations...
==> backend1: Booting VM...
==> backend1: Waiting for machine to boot. This may take a few minutes...
    backend1: SSH address: 127.0.0.1:2200
    backend1: SSH username: vagrant
    backend1: SSH auth method: private key
    backend1: Warning: Connection reset. Retrying...
    backend1: Warning: Remote connection disconnect. Retrying...
    backend1:
    backend1: Vagrant insecure key detected. Vagrant will automatically replace
    backend1: this with a newly generated keypair for better security.
    backend1:
    backend1: Inserting generated public key within guest...
    backend1: Removing insecure key from the guest if it's present...
    backend1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> backend1: Machine booted and ready!
...
a few lines omitted
...
==> backend2: Importing base box 'ubuntu/bionic64'...
==> backend2: Matching MAC address for NAT networking...
==> backend2: Setting the name of the VM: simple-scalable-ha_backend2_1583214431132_61478
==> backend2: Fixed port collision for 22 => 2222. Now on port 2201.
==> backend2: Clearing any previously set network interfaces...
==> backend2: Preparing network interfaces based on configuration...
    backend2: Adapter 1: nat
    backend2: Adapter 2: hostonly
==> backend2: Forwarding ports...
    backend2: 22 (guest) => 2201 (host) (adapter 1)
==> backend2: Running 'pre-boot' VM customizations...
==> backend2: Booting VM...
==> backend2: Waiting for machine to boot. This may take a few minutes...
    backend2: SSH address: 127.0.0.1:2201
    backend2: SSH username: vagrant
    backend2: SSH auth method: private key
    backend2: Warning: Connection reset. Retrying...
    backend2: Warning: Remote connection disconnect. Retrying...
    backend2:
    backend2: Vagrant insecure key detected. Vagrant will automatically replace
    backend2: this with a newly generated keypair for better security.
    backend2:
    backend2: Inserting generated public key within guest...
    backend2: Removing insecure key from the guest if it's present...
    backend2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> backend2: Machine booted and ready!
...
a few lines omitted
...
==> backend3: Importing base box 'ubuntu/bionic64'...
==> backend3: Matching MAC address for NAT networking...
==> backend3: Setting the name of the VM: simple-scalable-ha_backend3_1583214503275_11509
==> backend3: Fixed port collision for 22 => 2222. Now on port 2202.
==> backend3: Clearing any previously set network interfaces...
==> backend3: Preparing network interfaces based on configuration...
    backend3: Adapter 1: nat
    backend3: Adapter 2: hostonly
==> backend3: Forwarding ports...
    backend3: 22 (guest) => 2202 (host) (adapter 1)
==> backend3: Running 'pre-boot' VM customizations...
==> backend3: Booting VM...
==> backend3: Waiting for machine to boot. This may take a few minutes...
    backend3: SSH address: 127.0.0.1:2202
    backend3: SSH username: vagrant
    backend3: SSH auth method: private key
    backend3: Warning: Connection reset. Retrying...
    backend3: Warning: Remote connection disconnect. Retrying...
    backend3:
    backend3: Vagrant insecure key detected. Vagrant will automatically replace
    backend3: this with a newly generated keypair for better security.
    backend3:
    backend3: Inserting generated public key within guest...
    backend3: Removing insecure key from the guest if it's present...
    backend3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> backend3: Machine booted and ready!
...
a few lines omitted
...
==> staticfile: Importing base box 'ubuntu/bionic64'...
==> staticfile: Matching MAC address for NAT networking...
==> staticfile: Setting the name of the VM: simple-scalable-ha_staticfile_1583214566435_28005
==> staticfile: Fixed port collision for 22 => 2222. Now on port 2203.
==> staticfile: Clearing any previously set network interfaces...
==> staticfile: Preparing network interfaces based on configuration...
    staticfile: Adapter 1: nat
    staticfile: Adapter 2: hostonly
==> staticfile: Forwarding ports...
    staticfile: 22 (guest) => 2203 (host) (adapter 1)
==> staticfile: Running 'pre-boot' VM customizations...
==> staticfile: Booting VM...
==> staticfile: Waiting for machine to boot. This may take a few minutes...
    staticfile: SSH address: 127.0.0.1:2203
    staticfile: SSH username: vagrant
    staticfile: SSH auth method: private key
    staticfile: Warning: Connection reset. Retrying...
    staticfile: Warning: Remote connection disconnect. Retrying...
    staticfile:
    staticfile: Vagrant insecure key detected. Vagrant will automatically replace
    staticfile: this with a newly generated keypair for better security.
    staticfile:
    staticfile: Inserting generated public key within guest...
    staticfile: Removing insecure key from the guest if it's present...
    staticfile: Key inserted! Disconnecting and reconnecting using new SSH key...
==> staticfile: Machine booted and ready!
...
a few lines omitted
...
==> redis: Importing base box 'ubuntu/bionic64'...
==> redis: Matching MAC address for NAT networking...
==> redis: Setting the name of the VM: simple-scalable-ha_redis_1583214631974_29046
==> redis: Fixed port collision for 22 => 2222. Now on port 2204.
==> redis: Clearing any previously set network interfaces...
==> redis: Preparing network interfaces based on configuration...
    redis: Adapter 1: nat
    redis: Adapter 2: hostonly
==> redis: Forwarding ports...
    redis: 22 (guest) => 2204 (host) (adapter 1)
==> redis: Running 'pre-boot' VM customizations...
==> redis: Booting VM...
==> redis: Waiting for machine to boot. This may take a few minutes...
    redis: SSH address: 127.0.0.1:2204
    redis: SSH username: vagrant
    redis: SSH auth method: private key
    redis: Warning: Connection reset. Retrying...
    redis: Warning: Remote connection disconnect. Retrying...
    redis:
    redis: Vagrant insecure key detected. Vagrant will automatically replace
    redis: this with a newly generated keypair for better security.
    redis:
    redis: Inserting generated public key within guest...
    redis: Removing insecure key from the guest if it's present...
    redis: Key inserted! Disconnecting and reconnecting using new SSH key...
==> redis: Machine booted and ready!
...
a few lines omitted
...
==> database: Importing base box 'ubuntu/bionic64'...
==> database: Matching MAC address for NAT networking...
==> database: Setting the name of the VM: simple-scalable-ha_database_1583214686979_30232
==> database: Fixed port collision for 22 => 2222. Now on port 2205.
==> database: Clearing any previously set network interfaces...
==> database: Preparing network interfaces based on configuration...
    database: Adapter 1: nat
    database: Adapter 2: hostonly
==> database: Forwarding ports...
    database: 22 (guest) => 2205 (host) (adapter 1)
==> database: Running 'pre-boot' VM customizations...
==> database: Booting VM...
==> database: Waiting for machine to boot. This may take a few minutes...
    database: SSH address: 127.0.0.1:2205
    database: SSH username: vagrant
    database: SSH auth method: private key
    database: Warning: Connection reset. Retrying...
    database: Warning: Remote connection disconnect. Retrying...
    database:
    database: Vagrant insecure key detected. Vagrant will automatically replace
    database: this with a newly generated keypair for better security.
    database:
    database: Inserting generated public key within guest...
    database: Removing insecure key from the guest if it's present...
    database: Key inserted! Disconnecting and reconnecting using new SSH key...
==> database: Machine booted and ready!
Verify the created VMs with the VirtualBox command VBoxManage:

VBoxManage list vms

Output:
"simple-scalable-ha_loadbalancer_1583214317709_93178" {c3497f59-f1a3-4d39-a4d4-510686aa19a4}
"simple-scalable-ha_backend1_1583214374270_94466" {6d678039-5e8c-47e7-b657-bdde491ed937}
"simple-scalable-ha_backend2_1583214431132_61478" {5af32cde-d6eb-4fd5-ac96-3bad06c9f1d3}
"simple-scalable-ha_backend3_1583214503275_11509" {96b8537e-a689-4096-8d12-01812b25c1b7}
"simple-scalable-ha_staticfile_1583214566435_28005" {e2e61ae5-68fc-4a5b-94ec-22ff35caaf21}
"simple-scalable-ha_redis_1583214631974_29046" {839953db-029c-4bc6-8a5c-2ba73ed240c2}
"simple-scalable-ha_database_1583214686979_30232" {3ff8b725-5049-44bd-943f-cbcf966564e1}
Please note the UUIDs of the backend1, backend2, and backend3 VMs; they will be used later:
backend1: 6d678039-5e8c-47e7-b657-bdde491ed937
backend2: 5af32cde-d6eb-4fd5-ac96-3bad06c9f1d3
backend3: 96b8537e-a689-4096-8d12-01812b25c1b7
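Rather than copying a UUID by hand, it can be extracted from the `VBoxManage list vms` listing with standard text tools. A minimal sketch, run here against a saved sample line so it works without VirtualBox installed; on a real host you would pipe `VBoxManage list vms` directly into the same filter:

```shell
# One line in the same format as the `VBoxManage list vms` output above.
sample='"simple-scalable-ha_backend1_1583214374270_94466" {6d678039-5e8c-47e7-b657-bdde491ed937}'
# Select the backend1 line and keep only the text between the braces.
uuid=$(printf '%s\n' "$sample" | grep '_backend1_' | sed 's/.*{\(.*\)}.*/\1/')
echo "$uuid"   # 6d678039-5e8c-47e7-b657-bdde491ed937
```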
Create and attach additional hard drives
Specifically for backend1, backend2, and backend3, we need to create an additional separate hard drive (called a medium) attached to each VM. We can use VirtualBox commands to create the additional vdi files:
VBoxManage createmedium disk --filename backend1-data.vdi --size 10000

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: b52eba69-9ca0-415b-9347-28626615d5a8

VBoxManage createmedium disk --filename backend2-data.vdi --size 10000

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 91c5ff88-f1ae-478d-86eb-c3d367b02b0a

VBoxManage createmedium disk --filename backend3-data.vdi --size 10000

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: e503f8b3-b6e6-4497-99e6-7ed9b6c7dc21
Please note the UUIDs of backend1-data.vdi, backend2-data.vdi, and backend3-data.vdi; they will be used later:
backend1: b52eba69-9ca0-415b-9347-28626615d5a8
backend2: 91c5ff88-f1ae-478d-86eb-c3d367b02b0a
backend3: e503f8b3-b6e6-4497-99e6-7ed9b6c7dc21
To verify a created medium, use the following command with the UUID taken from the previous log output:
VBoxManage showmediuminfo disk b52eba69-9ca0-415b-9347-28626615d5a8

UUID: b52eba69-9ca0-415b-9347-28626615d5a8
Parent UUID: base
State: created
Type: normal (base)
Location: /home/webuntu/Code/simple-scalable-ha/backend1-data.vdi
Storage format: VDI
Format variant: dynamic default
Capacity: 10000 MBytes
Size on disk: 2 MBytes
Encryption: disabled

VBoxManage showmediuminfo disk 91c5ff88-f1ae-478d-86eb-c3d367b02b0a

UUID: 91c5ff88-f1ae-478d-86eb-c3d367b02b0a
Parent UUID: base
State: created
Type: normal (base)
Location: /home/webuntu/Code/simple-scalable-ha/backend2-data.vdi
Storage format: VDI
Format variant: dynamic default
Capacity: 10000 MBytes
Size on disk: 2 MBytes
Encryption: disabled
VBoxManage showmediuminfo disk e503f8b3-b6e6-4497-99e6-7ed9b6c7dc21

UUID: e503f8b3-b6e6-4497-99e6-7ed9b6c7dc21
Parent UUID: base
State: created
Type: normal (base)
Location: /home/webuntu/Code/simple-scalable-ha/backend3-data.vdi
Storage format: VDI
Format variant: dynamic default
Capacity: 10000 MBytes
Size on disk: 2 MBytes
Encryption: disabled
After the medium hard drives are successfully created, we need to attach each medium to the corresponding VM. Before doing that, we need to shut down the VMs:
vagrant halt backend1 \
&& vagrant halt backend2 \
&& vagrant halt backend3

==> backend1: Attempting graceful shutdown of VM...
==> backend2: Attempting graceful shutdown of VM...
==> backend3: Attempting graceful shutdown of VM...
Then we proceed to attach the media. Use the previously noted UUIDs as the parameters for this command. Attach backend1-data.vdi to the backend1 VM (the first parameter is the VM's UUID, and the --medium parameter takes the medium's UUID):
VBoxManage storageattach 6d678039-5e8c-47e7-b657-bdde491ed937 --storagectl SCSI --port 2 --type hdd --medium b52eba69-9ca0-415b-9347-28626615d5a8
Attach backend2-data.vdi to the backend2 VM:
VBoxManage storageattach 5af32cde-d6eb-4fd5-ac96-3bad06c9f1d3 --storagectl SCSI --port 2 --type hdd --medium 91c5ff88-f1ae-478d-86eb-c3d367b02b0a
Attach backend3-data.vdi to the backend3 VM:
VBoxManage storageattach 96b8537e-a689-4096-8d12-01812b25c1b7 --storagectl SCSI --port 2 --type hdd --medium e503f8b3-b6e6-4497-99e6-7ed9b6c7dc21
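The three attach commands can also be expressed as one loop over VM/medium UUID pairs. This dry-run sketch only prints the commands; remove the `echo` to execute them for real (VBoxManage must be on the PATH):

```shell
# Each pair is <vm-uuid>=<medium-uuid>, taken from the listings noted earlier.
for pair in \
  6d678039-5e8c-47e7-b657-bdde491ed937=b52eba69-9ca0-415b-9347-28626615d5a8 \
  5af32cde-d6eb-4fd5-ac96-3bad06c9f1d3=91c5ff88-f1ae-478d-86eb-c3d367b02b0a \
  96b8537e-a689-4096-8d12-01812b25c1b7=e503f8b3-b6e6-4497-99e6-7ed9b6c7dc21
do
  vm=${pair%%=*} medium=${pair##*=}
  # Dry run: print the command instead of running it.
  echo VBoxManage storageattach "$vm" --storagectl SCSI --port 2 \
    --type hdd --medium "$medium"
done
```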
To verify the attached medium on the corresponding VM, on Windows:

VBoxManage showvminfo 5af32cde-d6eb-4fd5-ac96-3bad06c9f1d3 | Select-String -Pattern SCSI
Or in Linux:
VBoxManage showvminfo 5af32cde-d6eb-4fd5-ac96-3bad06c9f1d3 | grep SCSI
Storage Controller Name (1): SCSI
SCSI (0, 0): /home/webuntu/VirtualBox VMs/simple-scalable-ha_backend2_1583214431132_61478/ubuntu-bionic-18.04-cloudimg.vmdk (UUID: 45f9063b-15b9-45e4-8d95-c0f3da81478a)
SCSI (1, 0): /home/webuntu/VirtualBox VMs/simple-scalable-ha_backend2_1583214431132_61478/ubuntu-bionic-18.04-cloudimg-configdrive.vmdk (UUID: 2fc9cc43-1836-4854-89a9-be41137eae7f)
SCSI (2, 0): /home/webuntu/Code/simple-scalable-ha/backend2-data.vdi (UUID: 91c5ff88-f1ae-478d-86eb-c3d367b02b0a)
Bring up again all the VMs
Do the same verification for backend1 and backend3 (the example above used backend2's UUID; try to figure it out by yourself) and bring all the VMs up again.
vagrant up backend1 \
&& vagrant up backend2 \
&& vagrant up backend3
With that, all the VMs needed for a scalable, highly available web application setup are in place!
Configuring Load Balancer
Initial server setup for load balancer
To gain access to the load balancer via SSH:
vagrant ssh loadbalancer
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-88-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Feb 27 05:28:25 WIB 2020

  System load:  0.03              Processes:             87
  Usage of /:   10.0% of 9.63GB   Users logged in:       0
  Memory usage: 23%               IP address for enp0s3: 10.0.2.15
  Swap usage:   0%                IP address for enp0s8: 192.168.50.100

0 packages can be updated.
0 updates are security updates.
To generate the English UTF-8 locale, use these commands:
sudo locale-gen en_US.UTF-8 \
&& export LANGUAGE=en_US.UTF-8 \
&& export LANG=en_US.UTF-8 \
&& export LC_ALL=en_US.UTF-8 \
&& sudo locale-gen en_US.UTF-8 \
&& sudo dpkg-reconfigure locales
If you want to use another locale, for example Indonesian with UTF-8, use these instead:
sudo locale-gen id_ID.UTF-8 \
&& export LANGUAGE=id_ID.UTF-8 \
&& export LANG=id_ID.UTF-8 \
&& export LC_ALL=id_ID.UTF-8 \
&& sudo locale-gen id_ID.UTF-8 \
&& sudo dpkg-reconfigure locales
These commands will show a selection dialog; make sure you choose the locale corresponding to the commands above.
Then we need to configure the timezone; in this scenario I will be using Asia/Jakarta:
sudo timedatectl set-timezone Asia/Jakarta \
&& timedatectl
                      Local time: Tue 2020-03-03 13:25:42 WIB
                  Universal time: Tue 2020-03-03 06:25:42 UTC
                        RTC time: Tue 2020-03-03 06:25:41
                       Time zone: Asia/Jakarta (WIB, +0700)
       System clock synchronized: no
systemd-timesyncd.service active: yes
                 RTC in local TZ: no
Creating self-signed certificate
Generate a self-signed SSL certificate:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/private/selfsigned.key \
-out /etc/ssl/certs/selfsigned.crt \
&& sudo cat /etc/ssl/certs/selfsigned.crt /etc/ssl/private/selfsigned.key \
| sudo tee /etc/ssl/certs/selfsigned.pem
Generating a RSA private key
.........+++++
......................................................................................................+++++
writing new private key to '/etc/ssl/private/selfsigned.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:ID
State or Province Name (full name) [Some-State]:Banten
Locality Name (eg, city) []:Cilegon
Organization Name (eg, company) [Internet Widgits Pty Ltd]:TRS
Organizational Unit Name (eg, section) []:TRS
Common Name (e.g. server FQDN or YOUR name) []:192.168.50.100
Email Address []:admin@192.168.50.100
After successfully creating the self-signed certificate and private key, you will see base64-encoded text surrounded by -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----, and by -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----:
-----BEGIN CERTIFICATE-----
MIID4zCCAsugAwIBAgIUb4VSEvfCIL3M3sY/uA5xOFbRVsQwDQYJKoZIhvcNAQEL
BQAwgYAxCzAJBgNVBAYTAklEMQ8wDQYDVQQIDAZCYW50ZW4xEDAOBgNVBAcMB0Np
bGVnb24xDDAKBgNVBAoMA1RSUzEMMAoGA1UECwwDVFJTMRIwEAYDVQQDDAlsb2Nh
bGhvc3QxHjAcBgkqhkiG9w0BCQEWD2FkbWluQGxvY2FsaG9zdDAeFw0yMDAzMDMw
NjI2NThaFw0yMTAzMDMwNjI2NThaMIGAMQswCQYDVQQGEwJJRDEPMA0GA1UECAwG
QmFudGVuMRAwDgYDVQQHDAdDaWxlZ29uMQwwCgYDVQQKDANUUlMxDDAKBgNVBAsM
A1RSUzESMBAGA1UEAwwJbG9jYWxob3N0MR4wHAYJKoZIhvcNAQkBFg9hZG1pbkBs
b2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDvepTN2tbO
Q+/QVoJQV5D2ZpgdZAKB8DBNfYBfFITsdwZLd0+z2Ed+FMTDFJc2mLvw8U2qw6aO
IUMe5pRZWCZ+WKB31OzM4JyM4yeUdqknptAdDHqJvbvNyvx5a5PdC7spODhdrfDx
ePeuf2dtToi76T4XF+2Lq10KpsrEoARClqSu+OYH23BxlOOu9itBpuSQn2sq1hjI
/Qap3UJmXeGbIseh+Q1WONEOyYyqqus7f4uoyLWSoP4f4G6Xg8M8qmN0tV+d8Se+
1ddhmmq+MrbyMM7sMMvJqwscwTy3+aFoKFqnHV84SrRqkCDT8wIN+9tjfVYNctem
/Uc3MChB1ys1AgMBAAGjUzBRMB0GA1UdDgQWBBQIJyS8grusLJHAWJA36E63HzpD
3zAfBgNVHSMEGDAWgBQIJyS8grusLJHAWJA36E63HzpD3zAPBgNVHRMBAf8EBTAD
AQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAiVVgmqC4q1jfdWqToigUhmcmxrAYJ+Ya6
d8HwHWnR27F2WZ9TX7haabz1oCNuuF9TOXty9gstLahuTBleQQfuL1dTlWisHT6M
Rew5oARuTt5GpxL6tf52GV80dBCkxL3dmbAH6JiFDh+hToyDSyD6K5FygfyKlsUY
p05Z9Comu05/Tx22/k46rh4XGivWFgZFe8K1KOJEToF81Xb6ba4F0sc1x0iW6Wr/
TlS5Av6wubXQNottljVM9JSLO+ezidxxKpgRRwbXkw+96PZYnxhei7sGXFs5K7Kn
uKwDzYg6XUSsMQwp9rtvmXKZ1enHlzrj2QZDb81Z805Dt5e+WZ8l
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDvepTN2tbOQ+/Q
VoJQV5D2ZpgdZAKB8DBNfYBfFITsdwZLd0+z2Ed+FMTDFJc2mLvw8U2qw6aOIUMe
5pRZWCZ+WKB31OzM4JyM4yeUdqknptAdDHqJvbvNyvx5a5PdC7spODhdrfDxePeu
f2dtToi76T4XF+2Lq10KpsrEoARClqSu+OYH23BxlOOu9itBpuSQn2sq1hjI/Qap
3UJmXeGbIseh+Q1WONEOyYyqqus7f4uoyLWSoP4f4G6Xg8M8qmN0tV+d8Se+1ddh
mmq+MrbyMM7sMMvJqwscwTy3+aFoKFqnHV84SrRqkCDT8wIN+9tjfVYNctem/Uc3
MChB1ys1AgMBAAECggEBAObfmMH1Lo3gtTx3il6GbTz/n7sGBdzbgNXUHoSLtbJ6
9VigB/jsk9Abma4xFa8PBHG/UQ9GXDY+HwWVaoPQFhxMuTeN0VWbXZH+FNRuqZmO
mqjGtQRCJOK7xTgR3JWIj8Gnb7/wx48k/jP+o+mfgvcWYEPHT74NUT/JmUaCtliy
IVhgnatJ6Jj+7OCZErcdBrosTXyDwVxwP2adujKevupFAk1ArabkHBt9kOPQzyII
8j4pyOdmdsEb5AbtS9RFGmBawkXeJAnqKAraOpDDK/MXZysLjN/i+v4mjuPlLU9m
O2Vn7sMxhsWN4TQzLUHoGzmIzq3NzOShPjypgp9op6kCgYEA+YW0JeGGbTzW34xs
+NPD6axLGe0BVni8ftguqPu8lU7Mt/qfWK0shN5kjYUJf6hOemFBrj5ME7HTKi/m
JYCkovjZaCn6kkWcJ8/IdhY52e84M5HFfbaCdSd1yQ0uG8PtqeUdheuV1+DbALNL
tOit5sQ5dlmjYz13A70XxknfYPMCgYEA9bIhReATaA0kW1CpYx1g8VEVUD3RZYTH
8SEufqQSjjNQCbJPgft6gqFmchcw8mYYT1kpJPetkXZlWTQMGvXzfe5uWunouBNY
rqI0V/H5myDkYytXs+gYw5fKFxs3MouORHq09ohjXS2CsaKPdGlHpXnZzifaAsg6
+lWfQNyfDTcCgYEAij6vvsUiy2cHzbdpsLrzMoYI3gZX1WbzWHvB7lH6++Y6ujwb
CPB5V+w3Xck1qArB4D/+OeG+GLNPQXJkWqbZkIm7OmD3uQ7kI5KViAdsafiF9Nxw
xOPXh70jHw80WqHFDXopT0dlL8Qe0laEPWkk4FQbWhzzz0oApIuhnnTTVE0CgYBH
SVq8EmqvCvkcgYfUGScSfUsoz/bcdK0qek0qM2Kq3ZqAZbsJ5LRECJ5XxgDOo+6z
vxPgBPjYNrjrK93DSM9QH4jnWezK09osOSXkynF4oA+D8oRsz4+32GerUpsuUC2E
EvJEgYgK9fRwo9Dpns5J5dPbK9bmbcAm+AbP4/NF+wKBgBR59Mv8xydjUEu8N4vh
v/5vmf+1pJfqL0BjTaJrX/Jq2dNGd+8Jvhj0z76evuOii56ZOCvBd4iMgLXaAoN+
h7ABHWKQgXzCTIhn/6/uS7SxhCPw+9yJ0kS4g4O7pilALrw/ul5gft043P1GIWQ0
nUnsQ74UkYH/eLu+mmXA7qtm
-----END PRIVATE KEY-----
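As an aside, the interactive prompts can be skipped entirely by passing the subject on the command line with -subj. This sketch writes to a temporary directory instead of /etc/ssl so it can run unprivileged; the subject fields mirror the answers above and should be adjusted to your own organization:

```shell
dir=$(mktemp -d)   # scratch directory; the tutorial itself writes to /etc/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$dir/selfsigned.key" \
  -out "$dir/selfsigned.crt" \
  -subj "/C=ID/ST=Banten/L=Cilegon/O=TRS/OU=TRS/CN=192.168.50.100" \
  2>/dev/null
# Concatenate certificate and key into the .pem that HAProxy will load.
cat "$dir/selfsigned.crt" "$dir/selfsigned.key" > "$dir/selfsigned.pem"
grep -c 'BEGIN' "$dir/selfsigned.pem"   # 2: one certificate, one private key
```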
To install HAProxy:
sudo apt update \
&& sudo apt install haproxy -y \
&& sudo nano /etc/haproxy/haproxy.cfg \
&& haproxy -c -V -f /etc/haproxy/haproxy.cfg
Add this configuration to the bottom of the HAProxy configuration file:
frontend Local_Server
    bind :80
    bind *:443 ssl crt /etc/ssl/certs/selfsigned.pem
    redirect scheme https if !{ ssl_fc }
    mode http
    default_backend My_Web_Servers

backend My_Web_Servers
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:\ localhost
    server backend1 192.168.50.110:80
    server backend2 192.168.50.120:80
    server backend3 192.168.50.130:80
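Note that the `option httpchk` directive only runs active health checks against servers declared with the `check` keyword; without it, the httpchk line has no effect. A hedged variant of the server lines that enables the check, assuming the same addresses:

```
    server backend1 192.168.50.110:80 check
    server backend2 192.168.50.120:80 check
    server backend3 192.168.50.130:80 check
```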
Reading package lists... Done
Building dependency tree
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  liblua5.3-0
Suggested packages:
  vim-haproxy haproxy-doc
The following NEW packages will be installed:
  haproxy liblua5.3-0
0 upgraded, 2 newly installed, 0 to remove and 3 not upgraded.
Need to get 1,231 kB of archives.
After this operation, 2,842 kB of additional disk space will be used.

[WARNING] 062/133344 (3709) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear.
Configuration file is valid
Restart and view HAProxy status:
sudo systemctl restart haproxy \
&& sudo systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-03-02 02:30:31 UTC; 17ms ago
     Docs: man:haproxy(1)
           file:/usr/share/doc/haproxy/configuration.txt.gz
  Process: 4466 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SU
 Main PID: 4476 (haproxy)
    Tasks: 2 (limit: 547)
   CGroup: /system.slice/haproxy.service
           ├─4476 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           └─4477 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-03-03 13:40:08 WIB; 27ms ago
     Docs: man:haproxy(1)
           file:/usr/share/doc/haproxy/configuration.txt.gz
  Process: 3717 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SU
 Main PID: 3728 (haproxy)
    Tasks: 2 (limit: 547)
   CGroup: /system.slice/haproxy.service
           ├─3728 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           └─3729 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

Mar 03 13:40:07 loadbalancer systemd[1]: Starting HAProxy Load Balancer...
Mar 03 13:40:08 loadbalancer haproxy[3728]: [WARNING] 062/134008 (3728) : Setting tune.ssl.default-d
Mar 03 13:40:08 loadbalancer haproxy[3728]: Proxy Local_Server started.
Mar 03 13:40:08 loadbalancer haproxy[3728]: Proxy Local_Server started.
Mar 03 13:40:08 loadbalancer haproxy[3728]: Proxy My_Web_Servers started.
Mar 03 13:40:08 loadbalancer haproxy[3728]: Proxy My_Web_Servers started.
Mar 03 13:40:08 loadbalancer systemd[1]: Started HAProxy Load Balancer.
Configure firewall:
sudo ufw default deny incoming \
&& sudo ufw default allow outgoing \
&& sudo ufw allow ssh \
&& sudo ufw allow http \
&& sudo ufw allow https \
&& sudo ufw --force enable \
&& sudo ufw status
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)
Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
Firewall is active and enabled on system startup
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
Exit the load balancer ssh session: