MonetDB

0560 Monetdbd for MonetDB

Monetdbd is the MonetDB management daemon. We use it to:
– initialize a directory as a DBfarm. A DBfarm is a folder that can contain several MonetDB databases;
– set and read farm-wide daemon options that are shared between databases;
– start and stop the actual database servers (mserver5 processes);
– accept client connections first, and then redirect each client to the selected database.

Monetdbd Information

monetdbd -v
monetdbd --version
                                       
This is how we can find the version of the monetdbd daemon. We can use either the short or the long option syntax.

This is how we can get help for monetdbd:
monetdbd -h
monetdbd --help
We can provide the name of a command after the "-h" option. That will give us help for that command.

Monetdbd and DBfarm

cd /home/fffovde/Desktop
mkdir DBfarm
monetdbd create DBfarm

                                                     
We will create a folder named "DBfarm" and initialize it with the "monetdbd create" command. Inside it, a new file, ".merovingian_properties", will appear. It is now possible to create new databases inside this folder.

These are the contents of the ".merovingian_properties" file.

Currently, no one is listening on the default MonetDB port 50000. I will now make the DBfarm directory functional.

monetdbd start DBfarm
Now our daemon is listening on port 50000, and the DBfarm is functional.
ss -tlnp | grep 50000

After the first start of the DBfarm, three new files will appear in the DBfarm directory. The ".merovingian_lock" file is empty. The "merovingian.pid" file holds the process ID of the monetdbd process. We can confirm that with "pgrep monetdbd".

The "merovingian.log" file contains the daemon's log.
We can stop the DBfarm with monetdbd.
monetdbd stop DBfarm
After closing the farm, the file "merovingian.pid" will disappear. This file exists only while the DBfarm is running. Now, again, no one is listening on port 50000.
ss -tlnp | grep 50000

Getting and Setting DBfarm Settings

We can read the settings of the DBfarm.

monetdbd get all DBfarm  
hostname              FffOvdeKomp
dbfarm                DBfarm
status                no monetdbd is serving this dbfarm
mserver               unknown (monetdbd not running)
logfile               DBfarm/merovingian.log
pidfile               DBfarm/merovingian.pid
loglevel              information
sockdir               /tmp
listenaddr            localhost
port                  50000
exittimeout           60
forward               proxy
discovery             true
discoveryttl          600
control               no
passphrase            <unset>
snapshotdir           <unset>
snapshotcompression   .tar.lz4
keepalive             60
mapisock              /tmp/.s.monetdb.50000
controlsock           /tmp/.s.merovingian.50000

We can easily change any of these settings. 
monetdbd set loglevel=debug DBfarm
                                                          
We can also change the settings while the DBfarm is active, but some changes will take effect only after monetdbd is restarted.

After we change a property for the first time, a list of the properties and their values will appear in the ".merovingian_properties" file.

Not all of the properties that we can observe with the "get all" command will be listed here. Properties that do not appear here cannot be changed with the "set" command; they are informative and read-only.

We should never make changes in this file manually. We should always use the "set" command.

Monetdbd Settings

We will take a look at settings that we can set with Monetdbd.

File and Log Settings

We know that the DBfarm folder contains the files "merovingian.log" and "merovingian.pid". I will choose to use two other files instead of them. I will change the settings for the "log" and "pid" files:
monetdbd set logfile=/home/fffovde/Desktop/logy.log DBfarm
monetdbd set pidfile=/home/fffovde/Desktop/pidy.pid DBfarm

I will start daemon again:
monetdbd start DBfarm

Now we have two new files on the desktop. Inside the "log" file, we have the log for monetdbd. Lines labeled with MSG are for information and warnings.

We can decide what will be logged in the log file. We can choose between the modes "error", "warning", "information" and "debug".

error = only errors
warning = errors and warnings
information = errors, warnings and information
debug = the same as information

Error lines will be marked with ERR. The default is "information". This is how we change the mode:
monetdbd set loglevel=error DBfarm

Connection Settings

Settings "sockdir" and "forward" should almost never be changed.

Listenaddr is "localhost" by default. That means that our server is not available from the network. If I set listenaddr to "0.0.0.0", our server will become available from the network. I will also change the default port number to 50001.

monetdbd set listenaddr=0.0.0.0 DBfarm
monetdbd set port=50001 DBfarm
These settings will become functional only after we restart monetdbd.
monetdbd stop DBfarm
monetdbd start DBfarm

Inside of the DBfarm, I will create and start one database:
monetdb create DBfarmDB
monetdb release DBfarmDB 

monetdb start DBfarmDB  
I will now open the green server. I explained the creation of this server in this blog post "link". Then I will try to connect to DBfarmDB from the green server.
mclient -h 192.168.100.124 -p 50001 -u monetdb -d DBfarmDB

We can use any of these modes for "listenaddr". It is also possible to use a hostname; in that case, Linux will resolve what address will be used.

                Loopback address    External host
only IPv4       127.0.0.1           0.0.0.0
only IPv6       ::1                 ::
IPv4 and IPv6   localhost           all

I will change back the port number of the "voc" server to 50000: "monetdbd set port=50000 DBfarm".

Snapshot Settings

In the blog post "link", we saw how to create a snapshot of our database. We used a stored procedure for that. The same is possible from the command line. The settings for command-line snapshots are in the monetdbd properties file.

"snapshotdir" is the directory where our snapshot will be made. If this is not set, we will not be able to make a snapshot.
monetdbd set snapshotdir=/home/fffovde/Desktop DBfarm

"snapshotcompression" is the level of compression. We can choose between "tar.lz4", "tar.bz2", "tar.gz" and "tar.xz". This is an optional setting. Note that there is a dot before "tar".
monetdbd set snapshotcompression=.tar.lz4 DBfarm

I will apply these changes, and then we will create a snapshot with the "monetdb" console program.

monetdb snapshot create DBfarmDB
This will create a ".tar.lz4" snapshot on our Desktop. Inside it, we will find the "DBfarmDB" database.

We can unset a property by providing an empty value:
monetdbd set snapshotdir= DBfarm

Remote Control Settings

We can call and control monetdbd on a remote computer. I will allow remote control for monetdbd on the "voc" server.
monetdbd set control=true DBfarm
We also have to provide a passphrase, which will be hashed with SHA-512.
monetdbd set passphrase=zebra DBfarm
Our passphrase will be written to the ".merovingian_properties" file. It will be written as a 128-digit hexadecimal number with the prefix "{SHA512}".
If we only have the hashed passphrase, we can provide it directly.
monetdbd set passphrase={SHA512}9df3424fec DBfarm
We have to restart monetdbd for these settings to become functional. We will also start the DBfarmDB database.
monetdbd stop DBfarm
monetdbd start DBfarm
monetdb start DBfarmDB
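The shape of the stored value can be reproduced with standard tools. This is only an illustration, and it assumes an unsalted SHA-512 digest of the raw passphrase; check your own ".merovingian_properties" file to see the real stored value.

```shell
# Illustrative only: hex SHA-512 digest of the passphrase "zebra",
# prefixed with {SHA512}; assumes no salting is applied
printf 'zebra' | sha512sum | awk '{print "{SHA512}" $1}'
```

Whatever the exact hashing scheme, the stored value is always the "{SHA512}" prefix followed by 128 hexadecimal digits.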

Remote control is established by using monetdb.

I will run this command from the green server, with monetdb. Now we can control monetdbd on the "voc" server remotely.
monetdb -h 192.168.100.124 -p 50000 -P zebra stop DBfarmDB

   

Remote Discovery Settings

MonetDB can "shout" its existence on the local network by UDP broadcast. At regular intervals, the MonetDB server will tell other computers on the LAN about its existence. This is how we can discover all of the MonetDB servers on our network.

I will tell "voc" server to broadcast its existence every 300 seconds. It will use default port 50000.

monetdbd set discovery=true DBfarm
monetdbd set discoveryttl=300 DBfarm
I will restart DBfarm.  

monetdbd stop DBfarm
monetdbd start DBfarm
                                            

On the green server I will run this command.
monetdb discover
We can see here that on the "voc" server (hostname FffOvdeKomp) we have the database "DBfarmDB". We also see the local green database. So, we get a list of all of the available databases.

Start and Stop Settings

I will stop "DBfarm", and then I will start it with the "-n" switch. This will leave the monetdbd daemon in the foreground and will not return control to the shell (we will not get a prompt on the next line). This is useful for debugging.
monetdbd stop DBfarm
monetdbd start -n DBfarm

We can close "monetdbd" using "ctrl+C" and then Enter.

There is "monetdbd" setting "exittimeout" with the default value 60.


This means that monetdbd will ask mserver5 to shut down politely with the SIGTERM signal (mserver5 can refuse).
If mserver5 has not closed after 60+5 seconds, monetdbd will send the SIGKILL signal, and mserver5 will be shut down forcibly.

There are three kinds of possible values for "exittimeout".

POSITIVE VALUE:
mserver5, if still alive, will be killed 5 seconds after the "exittimeout".
NEGATIVE VALUE:
mserver5 will only receive the SIGTERM signal. It will never be killed.
ZERO:
monetdbd will close, but the mserver5 processes will not. We should not use this.

When we wait for a really long query to finish, our client will not communicate with the server. A firewall can interpret that as an inactive connection, so it may close the connection. We want to avoid this, so we can use a probe signal to keep the connection alive. This is done with the "keepalive" setting, with a default value of 60 seconds.
# monetdbd set keepalive=60 DBfarm
This probe signal will be sent exactly 127 times. If the "keepalive" time is 60 seconds, that means our connection will stay alive for at least 127 × 60 seconds ≈ 2 hours.
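The arithmetic behind that estimate can be checked in the shell:

```shell
# 127 probes, one every "keepalive" seconds, keep an idle connection open
keepalive=60
total=$(( 127 * keepalive ))
echo "$total seconds"              # prints: 7620 seconds
echo "$(( total / 3600 )) hours"   # prints: 2 hours (integer division)
```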

Monetdbd Return Value

Monetdbd will return "0" for a successful action, and "1" for an unsuccessful one. If we try to start an already started daemon, we will get an error.
monetdbd start DBfarm; echo $?
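The "$?" variable is general shell behavior, not something specific to monetdbd. A minimal illustration with shell built-ins:

```shell
# $? holds the exit status of the most recent command:
# 0 means success, any non-zero value means failure
true
echo $?    # prints: 0
false
echo $?    # prints: 1
```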

Monetdbd Process Signals

I will start the monetdbd again, and then I will read process ID of our daemon.
monetdbd start DBfarm
cat ./pidy.pid
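A pidfile like this lets us test whether the daemon is still alive without touching it: "kill -0" delivers no signal, it only checks that the PID exists. A small sketch (the pidfile path is the example one from above):

```shell
# Check whether the process recorded in a pidfile is still running;
# kill -0 sends no signal, it only tests for the PID's existence
pidfile=./pidy.pid
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "daemon is running"
else
    echo "daemon is not running"
fi
```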

Let's assume that I want to use some other file instead of "logy.log" for logging, but I do not want to restart monetdbd. I will do that in two steps.

monetdbd set logfile=/home/fffovde/Desktop/DBfarm/merovingian.log DBfarm
kill -HUP  "$(cat ./pidy.pid)"
The first command changes the file for my log. The second one sends the SIGHUP signal to monetdbd, telling it to reopen its log file.

Now the daemon will stop using "logy.log" file, and will use "merovingian.log" file. We didn't have to restart the daemon.

We can kill monetdbd with a signal. All of the signals below work the same.
kill -TERM "$(cat "pidy.pid")"
kill -INT  "$(cat "pidy.pid")"
kill -QUIT "$(cat "pidy.pid")"
All three signals (SIGTERM, SIGINT, SIGQUIT) will ask our mserver5 processes to end gracefully (they can refuse).

0550 File and Hot Backup in MonetDB

File Backup

Our database ("voc") is just a folder inside the DBfarm folder. We can back up our database by making a copy of the "voc" folder. The problem is that before doing this we have to wait for the users to finish their queries, and then prevent them from running new ones.

Disconnecting Users

We can check other users by reading from the sessions table.
– The "idle" column is a timestamp for session "1". This is the moment when that session finished running its last statement.
– The "idle" column is null for session "0". In this session we are running the current "SELECT … FROM sessions" statement, so this session is busy.
select sessionid, username, idle, login from sys.sessions;

If we estimate that the database is not overloaded, then we can assume that all of the existing queries will end within 30 seconds. We will let the existing transactions run for 30 more seconds, and then we will close all the connections.

call sys.shutdown(30);
This procedure can be called only by the administrator.
If the "voc" user tries to run a query, he will get "unexpected end of file" and his session will end. He will not be allowed to log in again.

The shutdown procedure uses the SIGTERM signal. This procedure will ask the database to shut down, but the database can refuse to do it. In such circumstances we can use the second argument: "call sys.shutdown(30, true);". Now the SIGKILL signal is used, and the database will not be able to refuse shutting down.

After calling "sys.shutdown()", the admin will still be able to run statements, not only during the 30 seconds, but until he decides to leave the session.

If the admin quits the session, he will not be able to log into the server again.
At this moment, our database is not locked, but no one can connect to it.

 
monetdb status -l voc


Connecting to this database will be possible only after a restart. But after stopping the database, we want to prevent users from starting it again. We will lock our database so that only the admin can start it again.
monetdb lock voc
The last step is to fully stop our database:  
monetdb stop voc

Restoring a Backed-Up File

We now just have to create a copy of the database folder. This copy can be placed on the same computer, or we can move it to some other computer. We can now release the new database and start it.
monetdb release vocCopy
mclient -u monetdb -d vocCopy

Cleaning Up After File Backup

We will quit and stop database vocCopy.
quit
monetdb stop vocCopy

Now we can delete this vocCopy database:
monetdb destroy vocCopy

Original "voc" database is still in the maintenance mode. We will release it with "monetdb release voc".

Hot Backup of a Database

A hot backup is the same as a file backup, but we don't have to stop the database. We'll make a copy of the database files while our database is still running.

Local Copy

As admin, we will call the "hot_snapshot" procedure. The result will be our "voc" folder packed in the "tar" format.
mclient -u monetdb -d voc    #password monetdb
call sys.hot_snapshot('/home/fffovde/Desktop/backup.tar');
It is also possible to create a compressed tar file. We can use ".lz4", ".gz", ".bz2" or ".xz" compression.
call sys.hot_snapshot('/home/fffovde/Desktop/backup.tar.lz4');

We can grant the privilege of using these procedures to any user.
grant execute on procedure sys.hot_snapshot(string) to voc;
grant execute on procedure sys.shutdown(tinyint) to voc;

Remote Copy

We will now back up a database from the green server. The green server allows external access from the network. I explained how to create the green server in this blog post "link". We will open the green database on the green server.
monetdbd start /home/sima/monetdb/DBfarmG
monetdb start DatabaseG

From the "voc" server, we will login to the green server:
mclient -h 192.168.100.145 -p 50000 -u monetdb -d DatabaseG

We will call "hot_snapshot", but we will provide the second argument "false". That means that the backup shouldn't be created on the server, but on the client. The backup of the green database will appear on the "voc" server.
call sys.hot_snapshot('/home/fffovde/Desktop/backupGreen.tar', false);

Restoring Hot Backup

First, we will extract the database folder from the "tar" or compressed file.


I will place the folder "DatabaseG" into the farm folder on the "voc" server, and I will log into it.
mclient -u monetdb -d DatabaseG  #password monetdb
Now we have the green database on the "voc" server.
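The extract-and-place step can be rehearsed in a sandbox, without a real dbfarm. Everything below is a throwaway stand-in, not actual MonetDB files:

```shell
# Sandboxed rehearsal of a hot-backup restore: pack a folder into a tar
# archive, then unpack it into a (pretend) dbfarm directory
work=$(mktemp -d)
mkdir -p "$work/DatabaseG" "$work/dbfarm"
echo demo > "$work/DatabaseG/datafile"           # stand-in for database files
tar -cf "$work/snapshot.tar" -C "$work" DatabaseG
tar -xf "$work/snapshot.tar" -C "$work/dbfarm"   # restore into the farm
ls "$work/dbfarm"                                # prints: DatabaseG
```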

Cleaning Up After Hot Backup

I will quit the green database session, and I will delete this database on the "voc" server.
quit
monetdb stop DatabaseG
monetdb destroy DatabaseG

0540 Backing up the MonetDB Database

Database Schema ( as a file )

A database schema is a list of all database objects and their attributes, written in a formal language. Since we already have a ubiquitous formal language for creating database objects, SQL, we should use SQL to write our schema.

If our server understands a particular dialect of SQL, it can read the list of SQL statements and, based on that, can create the entire specified database.

Specification is provided as a file with a "*.sql" extension.  

A schema is useful when we want to create a database according to a specification. But we want to back up our database, and for that we need a backup of the data in addition to the schema. We can include INSERT INTO statements with all the data in the schema. Then we have a file that is an exact symbolic image of our database, and it can be used as a backup file.

We already used such a file in the blog post about the sample MonetDB database "link". This file, with data, can also be considered a schema file in the broader sense.

Qualities of a Good Schema

Database objects should be created in the proper order. For example, tables should be created before we create the views that are based on those tables. Here we can see the recommended order of statements:
a) CREATE SCHEMAS
b) CREATE SEQUENCES
c) CREATE TABLES
d) ALTER TABLES for FOREIGN KEYS
e) CREATE INDEXES
f) CREATE VIEWS
g) CREATE FUNCTIONS and PROCEDURES
h) CREATE TRIGGERS
i) INSERT INTO data
j) GRANT/REVOKE

When running a schema file, we should wrap it in a transaction to avoid incomplete creation.
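One way to do the wrapping, assuming the dump file does not already contain transaction statements (the filenames here are examples):

```shell
# Wrap an SQL dump in a transaction, so a failed replay leaves nothing behind.
# voc_db.sql stands in for a real msqldump output file.
echo 'CREATE TABLE t (i INT);' > voc_db.sql      # tiny example dump
{
    printf 'START TRANSACTION;\n'
    cat voc_db.sql
    printf 'COMMIT;\n'
} > voc_db_tx.sql
cat voc_db_tx.sql
```

Replaying "voc_db_tx.sql" with mclient will then either create everything or nothing.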

Credentials

In this blog post "link", I showed you how to create a file with credentials, and we placed it into the $HOME folder ("/home/fffovde") on the "voc" server. If we have such a file, we don't need to provide credentials for MonetDB every time we run a command in the shell.

You can notice below that I will run my msqldump commands without providing credentials, by using such a file. Users can only back up database objects they have access to, so with the administrator account we will be able to back up everything.

Msqldump

Msqldump is a console program used to create a schema from an existing MonetDB database. The resulting file can be used for backups, or to migrate the database to another MonetDB server. With some manual tweaking of this file, we can use it to migrate the database to a server other than MonetDB (perhaps PostgreSQL or MySQL).

We should redirect the result of this app to the file on our computer.
msqldump -d voc > /home/fffovde/Desktop/voc_db.sql

If we don't, the long schema will go to stdout, and we usually don't want that.

By default, instead of INSERT INTO statements, our schema will contain "COPY FROM stdin" statements. This will make restoring the database faster.
We can use the "-e" switch to add the NO ESCAPE clause to the COPY statements.
msqldump -d voc -e > /home/fffovde/Desktop/voc_db.sql

For returning INSERT INTO statements, we should use "-N" switch.
msqldump -d voc -N > /home/fffovde/Desktop/voc_db.sql

Partial Backup of Database

The "-f" option means that we will only back up functions.
msqldump -d voc -f > /home/fffovde/Desktop/voc_db.sql
With the "-t" option, we can back up only one table. We should provide the fully qualified name of the table, because the default schema for the administrator is sys (we are logged in as administrator).
msqldump -d voc -t voc.invoices > /home/fffovde/Desktop/voc_db.sql
With wildcards, we can back up a set of tables with similar names. The code below would return only the tables "passengers", "seafarers" and "soldiers".
msqldump -d voc -t voc.%ers > /home/fffovde/Desktop/voc_db.sql
With the uppercase "-D" switch, we will export the database without data.
msqldump -D voc > /home/fffovde/Desktop/voc_db.sql

Location of a Schema File

We don't have to use redirection to a file. We can use option "-o".
msqldump -d voc -o /home/fffovde/Desktop/voc_db.sql

We can export our database to a folder. In that case, the "-O" switch uses an uppercase letter.
msqldump -d voc -O /home/fffovde/Desktop/voc_db_folder  
If we export the database to a folder, that folder will contain all of the DDL (data definition language) statements inside the "dump.sql" file, but the data will be in one separate CSV file per table. The CSV files will not have a header (because we have the CREATE TABLE statements in "dump.sql"), and the delimiter will be TAB.
Only when we export to a folder is it possible to use the "-x" option. The CSV files will be compressed to that file format. We can use "lz4", "gz", "bz2" or "xz".
msqldump -d voc -x lz4 -O /home/fffovde/Desktop/voc_db_folder

Credentials in the Command

I will now move my credentials file to the temporary folder, so from now on we will have to provide credentials to msqldump.
mv /home/fffovde/.monetdb /tmp/.monetdb
The name of our user now has to be provided with the "-u" switch.
msqldump -u monetdb -d voc -o /home/fffovde/Desktop/voc_db.sql

The password is always provided separately; msqldump will prompt for it.

I will now jump to the green server. I explained how to create the green server in this blog post "link". I am using this server because it is set to accept external connections. The green server has the IP address "192.168.100.145" and the database DatabaseG.
I will run this command on the green server, so I am just doing the backup locally.
msqldump -h 192.168.100.145 -p 50000 -u monetdb -d DatabaseG -o /home/sima/Desktop/DatabaseG_db.sql
I will now run the same command from the "voc" server. This will back up "DatabaseG" from the green server onto the "voc" server.
msqldump -h 192.168.100.145 -p 50000 -u monetdb -d DatabaseG -o /home/fffovde/Desktop/DatabaseG_db.sql
In the previous blog post "link", I created the purple server, which is protected with TLS. The IP address of this server is 192.168.100.152, and its database is "tlsDB". I will now run this command on the "voc" server to back up the database from the purple server.
msqldump -u monetdb -d 'monetdbs://192.168.100.152:50000/tlsDB?cert=/home/fffovde/selfsigned.crt' -o /home/fffovde/Desktop/tlsDB_db.sql

By providing the IP address and the port number, we can back up a database locally, or we can pull it from a remote server, even if it is protected with TLS.

Other Msqldump Options

We will run these commands on the "voc" server.

Whichever way we create a backup, at the beginning of the SQL file we will have a welcome message.
msqldump -u monetdb -d voc -o /home/fffovde/Desktop/voc_db.sql
We have to add the "quiet" switch to avoid that message.
msqldump -q -u monetdb -d voc -o /home/fffovde/Desktop/voc_db.sql

If we use the "-X" option, we will get a message for every step msqldump is doing.

msqldump -X -u monetdb -d voc -o /home/fffovde/Desktop/voc_db.sql

By running this command in the shell, we will get a list of all of the possible options of msqldump.

msqldump -?



We can notice that a longer syntax exists for every option, too:

msqldump --help


Usage: msqldump [ options ] [ dbname ]
-h hostname | --host=hostname    host to connect to
-p portnr   | --port=portnr      port to connect to
-u user     | --user=user        user id
-d database | --database=database  database to connect to
-o filename | --output=filename  write dump to filename
-O dir      | --outputdir=dir    write multi-file dump to dir
-x ext      | --compression=ext  compression method ext for multi-file dump
-f          | --functions        dump functions
-t table    | --table=table      dump a database table
-D          | --describe         describe database

-N          | --inserts          use INSERT INTO statements
-e          | --noescape         use NO ESCAPE

-q          | --quiet            don't print welcome message
-X          | --Xdebug           trace mapi network interaction
-?          | --help             show this usage message
--functions and --table are mutually exclusive
--output and --outputdir are mutually exclusive
--inserts and --outputdir are mutually exclusive
--compression only has effect with --outputdir

Environment Variables

Environment variables are global variables that can be accessed by any process. They are key-value pairs that provide a way for processes to communicate with each other and the operating system.

We can list all the environment variables with "printenv".
For specific variables we just grep the result of "printenv":
printenv | grep home
If we know the name of variable, then we can read its value with "echo":
echo "$HOME"

My local time is set to Serbian.
echo "$LC_TIME"    #sr_RS
date # нед, 10. авг 2025.  09:48:22 CEST
For this session, I will set it to English. To "set" a variable means to create or update it.
export LC_TIME=en_US.utf8
date # Sun 10 Aug 2025 09:56:16 AM CEST
With "unset", we will remove variable from the current session.
unset LC_TIME
echo "$LC_TIME"

# export LC_TIME=en_US.utf8
This command will set the environment variable permanently if we write it to the file "$HOME/.bashrc".
cat /home/fffovde/.bashrc
After that, we reload this file with:
source ~/.bashrc  #tilde is the same as $HOME, or "/home/fffovde" in this case.


DOTMONETDBFILE Environment Variable

If we want to make our msqldump commands shorter, we can place values for some of the options into a file. DOTMONETDBFILE is an environment variable holding the path to that file.

In the temporary folder, I already have the file ".monetdb". It has a username and password. This is the file we moved before. I will add the IP address and port number of the green server to this file.
nano /tmp/.monetdb
user=monetdb
password=monetdb
host=192.168.100.145
port=50000

I will set my DOTMONETDBFILE environment variable to this file. This will be valid for one session.
export DOTMONETDBFILE=/tmp/.monetdb

Now we can use msqldump to back up DatabaseG from the green computer, without providing all the credentials inside the command. All the credentials are already stored in the ".monetdb" file.
msqldump -d DatabaseG -o /home/fffovde/Desktop/DatabaseG_db.sql

Default Folders for .monetdb File

DOTMONETDBFILE is only needed if we want to keep ".monetdb" in a non-standard folder. For the standard folders, we don't need DOTMONETDBFILE. We just have to place the file into one of the standard folders, and this is how we used the ".monetdb" file before the talk about environment variables.
1) The current directory.
2) The path inside the $XDG_CONFIG_HOME variable (if it exists).
3) The $HOME directory.

Msqldump will first look for ".monetdb" in the current directory, then in $XDG_CONFIG_HOME, and finally in $HOME (/home/fffovde).
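The lookup order can be mimicked with a short loop. This is a sandboxed illustration of the order, not the client's actual code:

```shell
# Mimic the .monetdb lookup order: current dir, then $XDG_CONFIG_HOME, then $HOME
tmp=$(mktemp -d)
mkdir -p "$tmp/cwd" "$tmp/xdg" "$tmp/home"
printf 'user=monetdb\n' > "$tmp/home/.monetdb"
printf 'user=monetdb\n' > "$tmp/cwd/.monetdb"
cd "$tmp/cwd"
for dir in "$PWD" "$tmp/xdg" "$tmp/home"; do
    if [ -f "$dir/.monetdb" ]; then
        echo "first match: $dir/.monetdb"    # the current-directory copy wins
        break
    fi
done
```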

I will move ".monetdb" file into "/home/fffovde", and I will disable DOTMONETDBFILE:
mv /tmp/.monetdb /home/fffovde/
unset DOTMONETDBFILE
If we now run msqldump, it will find the ".monetdb" file, now in the $HOME folder, and it will work.
msqldump -d DatabaseG -o /home/fffovde/Desktop/DatabaseG_db.sql

Ignoring Default Folders

If we set DOTMONETDBFILE to an empty string, msqldump will ignore all of the configuration files, and it will always ask for credentials.

# export DOTMONETDBFILE=""

From here you can download "voc" database sql dump file (schema file).

0530 Self Signed TLS with Stunnel for MonetDB

MonetDB cannot encrypt its communication over the wire. This is left to specialized programs called "TLS termination proxies". In that way, we get the superb protection and flexibility provided by these tools.

Clients must also have support for TLS. "Mclient" and "Pymonetdb" have support for TLS.

TLS (Transport Layer Security ) is a critical internet protocol that encrypts and authenticates data in transit. TLS is widely tested and trusted, and it is used in HTTPS, email, and VPNs to prevent eavesdropping and tampering.

Most popular client tools (DBeaver, JDBC, Python libraries, etc.) already support TLS.

Purple Server

I will create a totally new server for this blog post. I will call it the "Purple Server". This is a server on a new virtual machine.

I am currently the user "fff". I will add myself to the "monetdb" group. This group was created during the installation of MonetDB. Members of this group can start a database.

sudo adduser fff monetdb

For this to apply, we have to log out, and then log in again.
Then, I will start the monetdbd service, and I will create a database "tlsDB".

sudo systemctl start monetdbd
monetdb create tlsDB
monetdb release tlsDB


This database will be created in the default folder "/var/monetdb5/dbfarm/".

The command below will make my server a full-fledged, systemctl-controlled daemon. That means that MonetDB will automatically start when I start my computer.
systemctl enable monetdbd

Self-Signed Certificate

In this tutorial we will create certificate ourselves, and we will use that certificate both on the server and on the client.

We will change the default port of MonetDB to 49999. This is because I want to use port 50000 for the TLS connection.
monetdbd set port=49999 /var/monetdb5/dbfarm

Internal communication will use 127.0.0.1:49999, and external encrypted communication will use 192.168.100.152:50000. The only way to access MonetDB is through the TLS proxy server.

Creation of a Self-Signed Certificate

At the location "/etc/ssl/private", we have a folder that is only accessible to root. We can use any other folder, but it is important that the folder has restrictive permissions, and that only the admin can access it.
We cannot enter this folder with "sudo cd", because sudo doesn't work with the built-in command "cd". We will first take the role of root with "sudo -i", and then we will "cd /etc/ssl/private". That will make this folder our current folder.

Inside of this folder I will make a script.
touch /etc/ssl/private/SelfCertScript.sh

I will open this script in nano. I will paste the code.

nano SelfCertScript.sh

This is the version where we use an IP address for the server. I don't have a domain name, so I will use this version:
#!/bin/bash
set -euo pipefail

IP="192.168.100.152"
DAYS=90
KEY="selfsigned.key"
CRT="selfsigned.crt"

openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
-keyout "$KEY" -out "$CRT" -days "$DAYS" \
-subj "/CN=$IP" \
-addext "subjectAltName=IP:$IP"

This is the version to use if you have a domain name.
#!/bin/bash
set -euo pipefail  

DOMAIN="dbhost.mymonetdb.org"
DAYS=90
KEY="selfsigned.key"
CRT="selfsigned.crt"  

openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
-keyout "$KEY" -out "$CRT" -days "$DAYS" \
-subj "/CN=$DOMAIN" \
-addext "subjectAltName=DNS:$DOMAIN"
This is the version with both.
#!/bin/bash
set -euo pipefail  

IP="192.168.100.152"
DOMAIN="dbhost.mymonetdb.org"
DAYS=90
KEY="selfsigned.key"
CRT="selfsigned.crt"  

openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
-keyout "$KEY" -out "$CRT" -days "$DAYS" \
-subj "/CN=$DOMAIN" \
-addext "subjectAltName=DNS:$DOMAIN,IP:$IP"

https://dbhost.mymonetdb.org
https://192.168.100.152
If we use the IP in the script, then the clients must use the IP. If we use the domain name, then the clients must use the domain name.
If we use both in the script, then the client tools can connect with either the IP or the domain name.

We will now run our script.
bash ./SelfCertScript.sh
Two files will be created inside the private folder. "selfsigned.key" is our secret and must be kept secure. This is why it is important to use a folder accessible only to root. "selfsigned.crt" is the file that we will send to clients so that they can access our database.

We can also tighten the permissions on the files.
sudo chmod 600 /etc/ssl/private/selfsigned.*

Only root now has read and write rights on our files.

Using Some Other Folder

We can also keep our script, key and certificate in some other folder.
First, we will create the folder, and then we will make it accessible only to root.
"chmod 700" gives the owner read/write/execute rights, and removes all rights for the group and others.
sudo mkdir /etc/ssl/private2
sudo chown root:root /etc/ssl/private2
sudo chmod 700 /etc/ssl/private2
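What "700" actually grants can be checked on a scratch directory (so we don't have to poke at /etc/ssl):

```shell
# 700 = read/write/execute for the owner, no rights for group and others
scratch=$(mktemp -d)
chmod 700 "$scratch"
stat -c '%a %A' "$scratch"   # prints: 700 drwx------
```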

Script Explanation

#!/bin/bash – Shebang: run this script with the Bash shell (not sh, zsh, etc.).
-e (errexit) – Exit immediately if any simple command returns a non-zero status (if there is an error).
-u (nounset) – Treat the use of unset variables as an error, and exit.
-o pipefail – In a pipeline a | b | c, the pipeline's exit status is the first non-zero exit code among a, b, or c (instead of just c's).
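Assuming bash is installed, the effect of pipefail can be demonstrated by running a failing pipeline with and without it:

```python
import subprocess

# Without pipefail, the pipeline's status is that of the last command (true -> 0).
plain = subprocess.run(["bash", "-c", "false | true"]).returncode

# With pipefail, the first non-zero status in the pipeline (false -> 1) wins.
strict = subprocess.run(["bash", "-c", "set -o pipefail; false | true"]).returncode

print(plain, strict)  # 0 1
```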

Let's dissect this line:

openssl req -x509 -newkey rsa:2048 -sha256 -nodes -keyout "$KEY" -out "$CRT" -days "$DAYS" -subj "/CN=$DOMAIN" -addext "subjectAltName=DNS:$DOMAIN,IP:$IP"

openssl req – We start the self-signed certificate creation process.
-x509 – Create a self-signed certificate instead of a certificate signing request (CSR). A CSR is for commercial certificates.
-newkey rsa:2048 – Generate a new RSA private key of 2048 bits.
-sha256 – Use SHA-256 as the hash algorithm.
-nodes – The private key will not be encrypted. This is useful for automation, because there is no need for a password.
-keyout "$KEY" – The private key will be saved to this file.
-out "$CRT" – The certificate will be saved to this file.
-days "$DAYS" – The certificate will be valid for 90 days. Clients usually do not trust certificates with a longer lifespan.
-subj "/CN=$DOMAIN" – The domain name (or IP address).
-addext "subjectAltName=DNS:$DOMAIN,IP:$IP" – The same as above, but modern, because it can accept several IP addresses or domains.

Stunnel

"Stunnel" is the name of a program that we will use as a "TLS Termination Proxy". We now have to install it and configure it.

apt install stunnel4 – We install it. We are already root, so we don't need "sudo".

stunnel -version

which stunnel – We find its installation folder.

cd /etc/stunnel
touch monetdb.conf
We'll go to the stunnel configuration folder. There we will create the configuration file.

We'll open this file in nano, and we will paste the code.  

nano monetdb.conf
foreground = yes

cert = /etc/ssl/private/selfsigned.crt
key  = /etc/ssl/private/selfsigned.key

[monetdb]
accept  = 0.0.0.0:50000
connect = 127.0.0.1:49999

Stunnel Systemd Service

The MonetDB service always runs automatically, because it is controlled by systemd. We want the same for stunnel. For stunnel to become a service, we first have to create a configuration file for that service (a systemd unit file).

First, we go to the folder where we have to place the systemd unit file.
cd /etc/systemd/system  

We create new file:
touch stunnel-monetdb.service  

We open this file in nano text editor:
nano stunnel-monetdb.service  

We paste our code into it:
[Unit]
Description=Stunnel TLS for MonetDB
After=network.target

[Service]
ExecStart=/usr/bin/stunnel /etc/stunnel/monetdb.conf
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
PrivateTmp=yes

[Install]
WantedBy=multi-user.target

Let's explain parts of this systemd unit file:

After=network.target – This is a prerequisite. Stunnel should only be started after the network becomes functional during system startup.
ExecStart=/usr/bin/stunnel /etc/stunnel/monetdb.conf – Start the stunnel program with the specified configuration file.
ExecReload=/bin/kill -HUP $MAINPID – $MAINPID is the process ID of stunnel. "-HUP" means that stunnel should reload its configuration without restarting the process.
Restart=on-failure – Restart stunnel if it crashes.
PrivateTmp=yes – Instead of /tmp and /var/tmp, the service will use its own private temporary folders.
WantedBy=multi-user.target – During startup, stunnel should start after some basic services have already started.

Now we have to use this "stunnel-monetdb.service" file to register our new service with systemd:

sudo systemctl daemon-reexec – Used to refresh systemd itself.
sudo systemctl daemon-reload – Reload all of the systemd unit files. This will now include our "stunnel-monetdb.service" file.
sudo systemctl enable stunnel-monetdb.service – Have the service start automatically on reboot.
sudo systemctl start stunnel-monetdb.service – Start our service immediately, without waiting for the first reboot.

We can now check who is listening on port 50000. It is stunnel.

sudo ss -tnlp | grep 50000

FYI: MonetDB Systemd Unit File

We can find the location of the MonetDB systemd unit file with this command:
systemctl status monetdbd
We can read its content like this:

systemctl cat monetdbd
[Unit]
Description=MonetDB database server daemon
Documentation=man:monetdbd https://www.monetdb.org/documentation/admin-guide/manpages/monetdbd/
After=network.target
[Service]
Type=forking
User=monetdb
Group=monetdb
ExecStart=/usr/bin/monetdbd-11.53.3 start /var/monetdb5/dbfarm
ExecStop=/usr/bin/monetdbd-11.53.3 stop /var/monetdb5/dbfarm
Restart=on-failure
PIDFile=/run/monetdb/merovingian.pid
PrivateDevices=no
ProtectSystem=full
ProtectHome=read-only
[Install]
WantedBy=multi-user.target

TLS Connection from the Local Computer

I will exit the root shell with "exit". I am running this on the purple server, and the URL points to the purple server.
mclient -d monetdbs://192.168.100.152:50000/tlsDB
It won't work because mclient doesn't have access to the certificate.

We will copy the content of our certificate to the $HOME folder, because we need a folder where mclient has access.
sudo cat /etc/ssl/private/selfsigned.crt > $HOME/selfsigned.crt

We will use that location to tell mclient where to find the certificate.
mclient -d "monetdbs://192.168.100.152:50000/tlsDB?cert=$HOME/selfsigned.crt"

Now, we have access.

If you keep getting the message "certificate verify failed", check your SelfCertScript.sh. Make sure there are no invisible characters in it. After that, recreate your certificate and copy it again for mclient. Restarting the computer also helped in my case.

TLS Connection from the Remote Computer

On the other virtual machine, I will place "selfsigned.crt" into home directory.
ls -alh selfsigned.crt

The same command will work here, too.

mclient -d "monetdbs://192.168.100.152:50000/tlsDB?cert=$HOME/selfsigned.crt"

How to use TLS with Pymonetdb

In this blog post "link", I explained how to install pymonetdb. In the meantime, there have been some changes in the newest versions of Ubuntu.

If we follow that old blog post, we'll get an error.
sudo apt install python3-pip    # install pip
pip install pymonetdb           # install pymonetdb

While we can install pip, we will not be able to install pymonetdb this way. Ubuntu is trying to prevent us from installing packages into the global Python context. Ubuntu wants us to use a virtual environment; it doesn't want us to mess with the global Python environment.

We can still install into the global environment in this way. This will only work for packages that are inside of the Ubuntu repository.
sudo apt install python3-pymonetdb

I will install Spyder IDE:
sudo apt install spyder
I will open this GUI program and run this code:
import pymonetdb
connection = pymonetdb.connect("monetdbs://192.168.100.152:50000/tlsDB?cert=/home/fff/selfsigned.crt")
cursor = connection.cursor()
cursor.execute('SELECT 2')
print(cursor.fetchone())

We can also test the code in the command line.
python3 -c 'import pymonetdb;connection = pymonetdb.connect("monetdbs://192.168.100.152:50000/tlsDB?cert=/home/fff/selfsigned.crt" );cursor = connection.cursor();cursor.execute("SELECT 2");print(cursor.fetchone())'

We can also use syntax like this one:
connection = pymonetdb.connect("monetdbs://192.168.100.152:50000/tlsDB", cert="/home/fff/selfsigned.crt")

How to use TLS with ODBC

We will again use our Green server. We created that server in this blog post "link", and we installed ODBC drivers in it, in this other post "link".

In the mentioned post "0500 Proto_loaders, ODBC and COPY in MonetDB", we created the file "/etc/odbc.ini" with the credentials of the Blue server. We will now modify that file, so that it leads toward the Purple server, because we want to test TLS connection.

We'll change the credentials, and we will add two more properties: TLS and CERT.
FOR THE BLUE SERVER:
[DatabaseB]
Description = Connect to Blue Server
Driver = MonetDB
Server = 192.168.100.146
Port = 50000
Database = DatabaseB
User = monetdb
Password = monetdb
FOR THE PURPLE SERVER:
[tlsDB]
Description = Connect to Purple Server
Driver = MonetDB
Server = 192.168.100.152
Port = 50000
Database = tlsDB
User = monetdb
Password = monetdb
TLS = ON
CERT = /home/sima/selfsigned.crt

Now it is easy to connect to stunnel through the ODBC driver manager.

isql -v tlsDB

How to use TLS with JDBC

In this blog post "link", we downloaded JDBC driver/client to the green server. We will use that client again, but this time with the connection string for TLS.

java -jar /home/sima/Desktop/jdbcclient.jre8.jar -u monetdb -d "jdbc:monetdbs://192.168.100.152:50000/tlsDB?cert=/home/sima/selfsigned.crt"

We will also connect through DBeaver and JDBC. Before my explanation, it would be wise to read how to use DBeaver and JDBC without TLS. It is explained in this blog post "link". I will not repeat the whole story here, just the difference. The difference is in the connection: we have to provide the certificate.

We can create a new connection (1), or we can edit an existing one (2).
We will fill the dialog with the credentials for the Purple server (3). After that, we will go to the "Driver properties" tab (4). There we have to add our certificate. Now we can connect.

Including Certificate Inside of the URL

We can run this command on the purple and the green server, and we would get the same result:

openssl x509 -in /home/fff/selfsigned.crt -outform DER | sha256sum
openssl x509 -in /home/sima/selfsigned.crt -outform DER | sha256sum

This command takes our certificate, transforms it into DER format, and then calculates the SHA-256 hash of that encoding as hex digits.
b89c338234850f8def5d4612e6c868cc5f85fe22e6d6a6b5acf8a7d17a15d764 *stdin

For this demonstration, we just need the first 16 digits, so we extend the command like this:
openssl x509 -in /home/sima/selfsigned.crt -outform DER | openssl dgst -sha256 -r | cut -c1-16
b89c338234850f8d

We can now use those first 16 digits inside of our URL:
mclient -d "monetdbs://192.168.100.152:50000/tlsDB?certhash=sha256:b89c338234850f8d"
This way we can also connect to MonetDB.
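The certhash value is simply the hex SHA-256 of the certificate's DER bytes, cut down to a prefix. The hashing step can be sketched with Python's standard hashlib module (the input below is dummy data, not a real certificate):

```python
import hashlib

def certhash(der_bytes: bytes, prefix_len: int = 16) -> str:
    """Hex SHA-256 of the DER-encoded certificate, truncated to the
    first prefix_len hex digits, as used in certhash=sha256:..."""
    return hashlib.sha256(der_bytes).hexdigest()[:prefix_len]

fake_der = b"not a real certificate"
print("certhash=sha256:" + certhash(fake_der))
```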

0520 CHECK, RETURNING and Other in MonetDB

Sample Table

CREATE TABLE tabProducts (
    Color VARCHAR(10),
    Size  VARCHAR(5),
    Qty   INT );
INSERT INTO tabProducts (Color, Size, Qty) VALUES
   ('Red',  'M',  10),
   ('Red',  'XXL', 10),
   ('Blue', 'XL', 30);

RETURNING Clause

Only the SELECT statement returns values. INSERT, UPDATE and DELETE just silently do their job, without any feedback.

If we execute the statement:
UPDATE t1 SET Col1 = 'zzz' WHERE Id = 99;
we can afterwards check the rows that were updated:
SELECT * FROM t1 WHERE Id = 99;
If we could do both things with one statement, that would simplify things and reduce the strain on the database.

INSERT INTO tabProducts (Color, Size, Qty)
VALUES ( 'Blue', 'S', 40 ),
       ( 'Blue', 'S', 50 )
RETURNING Color, Size, Qty;
This INSERT statement will return values. The RETURNING clause will show us all the values that we have entered.

We can do the same thing with DELETE and UPDATE. DELETE will return deleted rows, and UPDATE will return updated rows.

DELETE FROM tabProducts
WHERE Qty = 50
RETURNING Size, Qty;
UPDATE tabProducts
SET Size = 'M' WHERE Qty = 40
RETURNING Size, Color, Qty;

We can use expressions in the RETURNING clause.

UPDATE tabProducts
SET Qty = 60 WHERE Qty = 40
RETURNING COUNT( * ), SUM( Qty );
DELETE FROM tabProducts
WHERE Qty = 60
RETURNING Qty + 15;

INSERT INTO tabProducts VALUES( 'Blue', 'M', 100 )
RETURNING Color, SUM( Qty )
GROUP BY Color;
This will not work. We cannot use GROUP BY in the RETURNING clause; the RETURNING clause must remain simple.

Referencing Columns by Their Position

SELECT Color, Size, qty
FROM tabProducts
ORDER BY 1, 2;   –same as ORDER BY Color, Size

SELECT Color, SUM( qty )
FROM tabProducts
GROUP BY 1;      –same as GROUP BY Color

In MonetDB it is possible to reference columns by their position in the SELECT clause.

Referencing Set of Columns with the Keyword ALL

SELECT Color, qty, COUNT( Size )
FROM tabProducts
GROUP BY ALL;   –same as GROUP BY Color, qty

When using GROUP BY, we can only group by columns which are not aggregated. If we want to group by all such columns, we can use "GROUP BY ALL".
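As an illustration of the GROUP BY ALL rule (group by every selected column that is not inside an aggregate), here is a small Python sketch over the same sample rows; it models the semantics only, not MonetDB's implementation:

```python
from collections import defaultdict

# Rows of tabProducts as (Color, Size, Qty)
rows = [("Red", "M", 10), ("Red", "XXL", 10), ("Blue", "XL", 30)]

# SELECT Color, qty, COUNT( Size ) FROM tabProducts GROUP BY ALL
# -> group by the non-aggregated columns (Color, qty), count Size per group.
groups = defaultdict(int)
for color, size, qty in rows:
    groups[(color, qty)] += 1

for (color, qty), cnt in sorted(groups.items()):
    print(color, qty, cnt)
# Blue 30 1
# Red 10 2
```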

SELECT Color, Size, qty
FROM tabProducts
ORDER BY ALL;   –same as ORDER BY Color, Size, qty

We can use ALL in the ORDER BY clause. This is the same as "ORDER BY Color, Size, qty". MonetDB will accept this syntax, but there is a bug: this syntax does nothing, so there will be no sorting.

Instead of the keyword ALL, we can use the star "*" sign.

SELECT Color, qty, COUNT( Size )
FROM tabProducts
GROUP BY *;   –same as GROUP BY Color, qty

SELECT Color, Size, qty
FROM tabProducts
ORDER BY *;   –same as ORDER BY Color, Size, qty

IS [NOT] DISTINCT FROM

Anything compared with NULL will return NULL.
SELECT 'null' = null;
SELECT null = null;

IS [NOT] DISTINCT FROM is a null-safe comparison operator. This operator will always return TRUE or FALSE, even if one of the operands is NULL.

SELECT NULL IS DISTINCT FROM NULL;       –false
SELECT 'A' IS DISTINCT FROM NULL;        –true
SELECT NULL IS NOT DISTINCT FROM NULL;   –true
SELECT 'A' IS NOT DISTINCT FROM NULL;    –false
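The semantics can be modeled in Python, with None standing in for NULL (a sketch only):

```python
def is_distinct_from(a, b):
    """Null-safe inequality: never returns None.
    Two NULLs are not distinct; NULL and a value are distinct."""
    if a is None and b is None:
        return False
    if a is None or b is None:
        return True
    return a != b

print(is_distinct_from(None, None))  # False
print(is_distinct_from("A", None))   # True
print(is_distinct_from("A", "A"))    # False
```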

CHECK Constraint

A check constraint is a rule which specifies a condition (a boolean expression) that must be met by each row in a database table. This rule limits the acceptable values for data.

To our table I will add a check constraint requiring that the quantity column must have values less than 1. This will fail, because the existing data already violates the condition (every quantity is 1 or more).
ALTER TABLE tabProducts ADD CONSTRAINT "QtyLess1" CHECK (qty < 1);

If we change our condition so that qty must be less than 100, it will succeed.
ALTER TABLE tabProducts ADD CONSTRAINT "QtyLess100" CHECK (qty < 100);
After that, we will try to insert the number 111 into the qty column:
INSERT INTO
tabProducts( Color, Size, Qty )
VALUES ( 'Blue', 'XL', 111 );

This will fail because of the constraint (111>100).

UPDATE tabProducts SET qty = 111 WHERE qty = 30;
This also means that we cannot update a value in the qty column to a value bigger than 100.

We can only insert values that are smaller than 100.
UPDATE tabProducts SET qty = 99 WHERE qty = 30;
SELECT * FROM tabProducts;

How to Add Check Constraint?

We can add several constraints to the same column. After this statement we will have two constraints: "qty > 0" and "qty < 100".
ALTER TABLE tabProducts ADD CONSTRAINT "QtyGrt0" CHECK (qty > 0);
That is not efficient. It is much better to add both constraints with one statement. We can combine conditions with AND and OR.
ALTER TABLE tabProducts ADD CONSTRAINT QtyConstraints CHECK (qty > 0 AND qty < 100);

Constraints can combine several columns in the requirement expression:
ALTER TABLE tabProducts ADD CONSTRAINT CheckColorSize CHECK ( Color = 'Red' OR Size = 'XL' );

We can add constraint during the creation of a table.
CREATE TABLE tabOrders ( Price INT, Qty INT, CONSTRAINT ValidOrders CHECK ( Price > 10 AND Qty < 10 )  );

During table creation, we can add a constraint at the single-column level. The server will provide a default name for such a constraint.
CREATE TABLE tabOneColumn ( Col1 CHAR, Col2 INT CHECK ( Col2 < 999 )  );
The default name consists of the table name, the column name, and the keyword "check". In this example, that would be "tabonecolumn_col2_check".

Instead of using the default name for a single-column CHECK constraint, it is much better to give the constraint a specific name.
CREATE TABLE tabOneColumnNamed
( Col1 INT CONSTRAINT Col1Constraint CHECK ( Col1 < 999 )  );

Limitations on CHECK Constraints

A CHECK constraint can only refer to one row of a table. We cannot use aggregate functions, because that would break this limitation.
ALTER TABLE tabProducts ADD CONSTRAINT OverFlow CHECK ( SUM( Qty ) < 1000 );
This statement will fail. It also means that a CHECK constraint cannot refer to tables other than the table on which it is defined.

A simple check on a single column has minimal impact on performance. We should avoid complex check conditions.

CHECK Constraints and Nulls

INSERT INTO
tabOneColumn ( Col1, Col2 )
VALUES ( null, null );
This INSERT statement will always succeed. CHECK constraints cannot check null values.
SELECT * FROM tabOneColumn;

This CHECK constraint will always return null, because nothing can be compared with null.
ALTER TABLE tabOneColumn ADD CONSTRAINT ZZZ CHECK ( Col1 <> null );
CHECK will prevent the entry of a row only if the condition returns FALSE, so this constraint is useless.
INSERT INTO tabOneColumn ( Col1, Col2 ) VALUES ( null, 33 );
SELECT * FROM tabOneColumn;
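The rule behind this behavior is that a CHECK rejects a row only when its condition evaluates to FALSE; TRUE and UNKNOWN both pass. A minimal Python sketch, with None standing in for the UNKNOWN result of a comparison with NULL:

```python
def check_allows(condition_result):
    """A CHECK constraint rejects a row only when its condition
    is FALSE; TRUE and UNKNOWN (None) both let the row through."""
    return condition_result is not False

# "Col1 <> null" evaluates to UNKNOWN for every row, so every row passes.
print(check_allows(None))   # True  -> row accepted, constraint useless
print(check_allows(True))   # True  -> row accepted
print(check_allows(False))  # False -> row rejected
```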

How to Delete CHECK Constraints?

We can find our CHECK constraints in the view "information_schema.check_constraints".
SELECT * FROM information_schema.check_constraints WHERE table_name = 'tabonecolumn';

We can delete constraints like this:
– We can delete just the CHECK constraint.
– We can delete the whole table.

ALTER TABLE tabonecolumn DROP CONSTRAINT zzz;
DROP TABLE tabOneColumnNamed;
Now, our constraints are gone.

SELECT * FROM information_schema.check_constraints WHERE table_name = 'tabonecolumnnamed' OR constraint_name = 'zzz';

LIKE and ILIKE Operators

LIKE is a pattern matching operator. It can help us to find patterns in text. LIKE is based on two wildcard characters. The percent sign "%" matches any run of consecutive characters. The underscore "_" matches exactly one character. Here are some examples:

SELECT 'zzz' LIKE '%';        –true
SELECT 'zzz' LIKE '___';      –true
SELECT 'Azz' LIKE 'A%';       –true
SELECT 'AzzA' LIKE '%zz%';    –true
SELECT 'AzzA' LIKE 'A__A';    –true
SELECT 'Azz' LIKE 'Azz_';     –false

ILIKE is the case-insensitive version.
SELECT 'fff' LIKE 'F_F';    –false
SELECT 'fff' ILIKE 'F_F';   –true

We can negate LIKE with NOT.
SELECT 'M' NOT LIKE 'M';   –false
SELECT 'M' LIKE 'M';       –true

The default escape character is the backslash "\".
SELECT '%_' LIKE '\%\_';   –true
We can declare any other character to be the ESCAPE sign.
SELECT '%_' LIKE '#%#_' ESCAPE '#';   –true
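A LIKE pattern maps directly onto a regular expression: "%" becomes ".*" and "_" becomes a single ".". Here is an illustrative Python sketch of that translation (MonetDB does not necessarily implement LIKE this way):

```python
import re

def sql_like(text: str, pattern: str, escape: str = "\\") -> bool:
    """Translate a SQL LIKE pattern into an anchored regex and match it."""
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == escape and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # escaped wildcard is literal
            i += 2
            continue
        if ch == "%":
            out.append(".*")   # any run of characters
        elif ch == "_":
            out.append(".")    # exactly one character
        else:
            out.append(re.escape(ch))
        i += 1
    return re.fullmatch("".join(out), text, flags=re.DOTALL) is not None

print(sql_like("zzz", "%"))          # True
print(sql_like("Azz", "Azz_"))       # False
print(sql_like("%_", "#%#_", "#"))   # True, with "#" as the ESCAPE sign
```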

Function regexp_replace

For more complex patterns, we can use the regexp_replace function. This function accepts the original string, a search pattern, and a string that will replace the found pattern. We will replace "m", followed by two characters, with "z".

SELECT regexp_replace( 'maaam', 'm.{2}', 'z' );

This function is case sensitive, except if we use the fourth, optional argument.
SELECT regexp_replace( 'maaam', 'M.{2}', 'z' );      –no match, string is unchanged
SELECT regexp_replace( 'maaam', 'M.{2}', 'z', 'i' ); –the match is replaced

This function also accepts other modifiers (flags). I tested that it will accept "m,i,s,x,xx". Flag "xx" is the same as "x".

It is interesting that it will not accept the flag "g". It seems that this modifier is constantly turned on.
SELECT regexp_replace( 'SSS', 'S', 'P' );

SELECT regexp_replace('first\nfirst', '^first', '*', 'm') AS result;
SELECT regexp_replace('a\nb', 'a.b', 'X', 's') AS result;
SELECT regexp_replace('abc123', ' 1 2 3 ', '', 'x') AS result;
Here are examples that you can try with and without modifier (flag).

SELECT regexp_replace('Prisca Gbaguidi', '\\w+\\s\\w+', 'Mireille Gbaguidi');
–Mireille Gbaguidi
Backslashes have to be escaped.

SELECT regexp_replace('Prisca Gbaguidi', '(\\w+)(\\s)(\\w+)', '\\3 \\1');
–Gbaguidi Prisca
A regex with numbered capturing groups.
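Python's re module accepts very similar patterns and flags, so we can experiment there as well. Note that re.sub replaces all occurrences by default, just as regexp_replace appears to, and Python writes backreferences as \1:

```python
import re

# Case-insensitive flag, like the 'i' modifier.
print(re.sub(r"M.{2}", "z", "maaam", flags=re.IGNORECASE))  # zam

# Multiline flag, like the 'm' modifier: ^ matches at each line start.
print(re.sub(r"^first", "*", "first\nfirst", flags=re.MULTILINE))

# Numbered capturing groups: swap the two words.
print(re.sub(r"(\w+)(\s)(\w+)", r"\3 \1", "Prisca Gbaguidi"))  # Gbaguidi Prisca
```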

TRUNCATE and SERIAL Data Type

We will examine how TRUNCATE affects the SERIAL data type.
CREATE TABLE Students ( ID SERIAL, Name VARCHAR(10) );
INSERT INTO students(name) VALUES ('Wiremu'), ('Hinewai');
SELECT * FROM Students;

We will truncate this table, but continue the sequence.
TRUNCATE students CONTINUE IDENTITY;
INSERT INTO students(name) VALUES ('Wiremu'), ('Hinewai');
SELECT * FROM Students;

The default behavior is to restart the sequence.
TRUNCATE students RESTART IDENTITY;
INSERT INTO students(name) VALUES ('Wiremu'), ('Hinewai');
SELECT * FROM Students;